00:00:00.000 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 2001
00:00:00.000 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3267
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.049 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.050 The recommended git tool is: git
00:00:00.051 using credential 00000000-0000-0000-0000-000000000002
00:00:00.054 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.068 Fetching changes from the remote Git repository
00:00:00.072 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.094 Using shallow fetch with depth 1
00:00:00.094 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.094 > git --version # timeout=10
00:00:00.127 > git --version # 'git version 2.39.2'
00:00:00.127 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.157 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.157 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:02.964 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:02.977 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:02.991 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD)
00:00:02.991 > git config core.sparsecheckout # timeout=10
00:00:03.002 > git read-tree -mu HEAD # timeout=10
00:00:03.019 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5
00:00:03.038 Commit message: "inventory: add WCP3 to free inventory"
00:00:03.038 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10
00:00:03.142 [Pipeline] Start of Pipeline
00:00:03.155 [Pipeline] library
00:00:03.156 Loading library shm_lib@master
00:00:03.156 Library shm_lib@master is cached. Copying from home.
00:00:03.169 [Pipeline] node
00:00:03.176 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:03.177 [Pipeline] {
00:00:03.187 [Pipeline] catchError
00:00:03.189 [Pipeline] {
00:00:03.199 [Pipeline] wrap
00:00:03.208 [Pipeline] {
00:00:03.215 [Pipeline] stage
00:00:03.216 [Pipeline] { (Prologue)
00:00:03.439 [Pipeline] sh
00:00:03.722 + logger -p user.info -t JENKINS-CI
00:00:03.742 [Pipeline] echo
00:00:03.743 Node: GP11
00:00:03.751 [Pipeline] sh
00:00:04.046 [Pipeline] setCustomBuildProperty
00:00:04.056 [Pipeline] echo
00:00:04.058 Cleanup processes
00:00:04.064 [Pipeline] sh
00:00:04.345 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:04.345 500037 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:04.357 [Pipeline] sh
00:00:04.635 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:04.635 ++ awk '{print $1}'
00:00:04.635 ++ grep -v 'sudo pgrep'
00:00:04.635 + sudo kill -9
00:00:04.635 + true
00:00:04.645 [Pipeline] cleanWs
00:00:04.652 [WS-CLEANUP] Deleting project workspace...
00:00:04.652 [WS-CLEANUP] Deferred wipeout is used...
00:00:04.657 [WS-CLEANUP] done
00:00:04.660 [Pipeline] setCustomBuildProperty
00:00:04.672 [Pipeline] sh
00:00:04.948 + sudo git config --global --replace-all safe.directory '*'
00:00:05.045 [Pipeline] httpRequest
00:00:05.071 [Pipeline] echo
00:00:05.073 Sorcerer 10.211.164.101 is alive
00:00:05.079 [Pipeline] httpRequest
00:00:05.083 HttpMethod: GET
00:00:05.083 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:05.084 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:05.100 Response Code: HTTP/1.1 200 OK
00:00:05.100 Success: Status code 200 is in the accepted range: 200,404
00:00:05.100 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:09.954 [Pipeline] sh
00:00:10.238 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:10.254 [Pipeline] httpRequest
00:00:10.271 [Pipeline] echo
00:00:10.273 Sorcerer 10.211.164.101 is alive
00:00:10.282 [Pipeline] httpRequest
00:00:10.287 HttpMethod: GET
00:00:10.288 URL: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz
00:00:10.288 Sending request to url: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz
00:00:10.308 Response Code: HTTP/1.1 200 OK
00:00:10.308 Success: Status code 200 is in the accepted range: 200,404
00:00:10.308 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz
00:01:29.562 [Pipeline] sh
00:01:29.844 + tar --no-same-owner -xf spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz
00:01:32.386 [Pipeline] sh
00:01:32.671 + git -C spdk log --oneline -n5
00:01:32.671 719d03c6a sock/uring: only register net impl if supported
00:01:32.671 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev
00:01:32.671 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO
00:01:32.671 6c7c1f57e accel: add sequence outstanding stat
00:01:32.671 3bc8e6a26 accel: add utility to put task
00:01:32.690 [Pipeline] withCredentials
00:01:32.699 > git --version # timeout=10
00:01:32.711 > git --version # 'git version 2.39.2'
00:01:32.726 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:01:32.728 [Pipeline] {
00:01:32.738 [Pipeline] retry
00:01:32.740 [Pipeline] {
00:01:32.760 [Pipeline] sh
00:01:33.041 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4
00:01:33.313 [Pipeline] }
00:01:33.335 [Pipeline] // retry
00:01:33.340 [Pipeline] }
00:01:33.363 [Pipeline] // withCredentials
00:01:33.374 [Pipeline] httpRequest
00:01:33.406 [Pipeline] echo
00:01:33.408 Sorcerer 10.211.164.101 is alive
00:01:33.415 [Pipeline] httpRequest
00:01:33.420 HttpMethod: GET
00:01:33.421 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:33.421 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:33.423 Response Code: HTTP/1.1 200 OK
00:01:33.423 Success: Status code 200 is in the accepted range: 200,404
00:01:33.424 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:38.886 [Pipeline] sh
00:01:39.192 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:41.103 [Pipeline] sh
00:01:41.383 + git -C dpdk log --oneline -n5
00:01:41.383 caf0f5d395 version: 22.11.4
00:01:41.383 7d6f1cc05f
Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:41.383 dc9c799c7d vhost: fix missing spinlock unlock 00:01:41.383 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:41.383 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:41.395 [Pipeline] } 00:01:41.413 [Pipeline] // stage 00:01:41.424 [Pipeline] stage 00:01:41.426 [Pipeline] { (Prepare) 00:01:41.450 [Pipeline] writeFile 00:01:41.469 [Pipeline] sh 00:01:41.748 + logger -p user.info -t JENKINS-CI 00:01:41.761 [Pipeline] sh 00:01:42.043 + logger -p user.info -t JENKINS-CI 00:01:42.055 [Pipeline] sh 00:01:42.337 + cat autorun-spdk.conf 00:01:42.337 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:42.337 SPDK_TEST_NVMF=1 00:01:42.337 SPDK_TEST_NVME_CLI=1 00:01:42.337 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:42.337 SPDK_TEST_NVMF_NICS=e810 00:01:42.337 SPDK_TEST_VFIOUSER=1 00:01:42.337 SPDK_RUN_UBSAN=1 00:01:42.337 NET_TYPE=phy 00:01:42.337 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:42.337 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:42.344 RUN_NIGHTLY=1 00:01:42.349 [Pipeline] readFile 00:01:42.378 [Pipeline] withEnv 00:01:42.380 [Pipeline] { 00:01:42.395 [Pipeline] sh 00:01:42.677 + set -ex 00:01:42.677 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:42.677 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:42.677 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:42.677 ++ SPDK_TEST_NVMF=1 00:01:42.677 ++ SPDK_TEST_NVME_CLI=1 00:01:42.677 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:42.677 ++ SPDK_TEST_NVMF_NICS=e810 00:01:42.677 ++ SPDK_TEST_VFIOUSER=1 00:01:42.677 ++ SPDK_RUN_UBSAN=1 00:01:42.677 ++ NET_TYPE=phy 00:01:42.677 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:42.677 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:42.677 ++ RUN_NIGHTLY=1 00:01:42.677 + case $SPDK_TEST_NVMF_NICS in 00:01:42.677 + DRIVERS=ice 00:01:42.677 + [[ tcp == \r\d\m\a ]] 00:01:42.677 + [[ -n ice ]] 00:01:42.677 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:42.677 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:42.677 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:42.677 rmmod: ERROR: Module irdma is not currently loaded 00:01:42.677 rmmod: ERROR: Module i40iw is not currently loaded 00:01:42.677 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:42.677 + true 00:01:42.677 + for D in $DRIVERS 00:01:42.677 + sudo modprobe ice 00:01:42.677 + exit 0 00:01:42.686 [Pipeline] } 00:01:42.706 [Pipeline] // withEnv 00:01:42.711 [Pipeline] } 00:01:42.729 [Pipeline] // stage 00:01:42.740 [Pipeline] catchError 00:01:42.742 [Pipeline] { 00:01:42.757 [Pipeline] timeout 00:01:42.758 Timeout set to expire in 50 min 00:01:42.759 [Pipeline] { 00:01:42.775 [Pipeline] stage 00:01:42.777 [Pipeline] { (Tests) 00:01:42.796 [Pipeline] sh 00:01:43.077 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:43.077 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:43.077 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:43.077 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:43.077 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:43.077 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:43.077 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:43.077 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:43.077 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:43.077 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:43.077 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:43.077 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:43.077 + source /etc/os-release
00:01:43.077 ++ NAME='Fedora Linux'
00:01:43.077 ++ VERSION='38 (Cloud Edition)'
00:01:43.077 ++ ID=fedora
00:01:43.077 ++ VERSION_ID=38
00:01:43.077 ++ VERSION_CODENAME=
00:01:43.077 ++ PLATFORM_ID=platform:f38
00:01:43.077 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:43.077 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:43.077 ++ LOGO=fedora-logo-icon
00:01:43.077 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:43.077 ++ HOME_URL=https://fedoraproject.org/
00:01:43.077 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:43.077 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:43.077 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:43.077 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:43.077 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:43.077 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:43.077 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:43.077 ++ SUPPORT_END=2024-05-14
00:01:43.077 ++ VARIANT='Cloud Edition'
00:01:43.077 ++ VARIANT_ID=cloud
00:01:43.077 + uname -a
00:01:43.077 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:43.077 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:44.011 Hugepages
00:01:44.011 node hugesize free / total
00:01:44.011 node0 1048576kB 0 / 0
00:01:44.011 node0 2048kB 0 / 0
00:01:44.011 node1 1048576kB 0 / 0
00:01:44.011 node1 2048kB 0 / 0
00:01:44.011 
00:01:44.011 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:44.011 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:01:44.011 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:01:44.011 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:01:44.011 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:01:44.011 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:01:44.011 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:01:44.011 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:01:44.011 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:01:44.012 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:01:44.012 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:01:44.012 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:01:44.012 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:01:44.012 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:01:44.012 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:01:44.012 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:01:44.012 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:01:44.012 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:01:44.012 + rm -f /tmp/spdk-ld-path
00:01:44.012 + source autorun-spdk.conf
00:01:44.012 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:44.012 ++ SPDK_TEST_NVMF=1
00:01:44.012 ++ SPDK_TEST_NVME_CLI=1
00:01:44.012 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:44.012 ++ SPDK_TEST_NVMF_NICS=e810
00:01:44.012 ++ SPDK_TEST_VFIOUSER=1
00:01:44.012 ++ SPDK_RUN_UBSAN=1
00:01:44.012 ++ NET_TYPE=phy
00:01:44.012 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:44.012 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:44.012 ++ RUN_NIGHTLY=1
00:01:44.012 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:44.012 + [[ -n '' ]]
00:01:44.012 + sudo git config --global --add safe.directory
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:44.271 + for M in /var/spdk/build-*-manifest.txt 00:01:44.271 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:44.271 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:44.271 + for M in /var/spdk/build-*-manifest.txt 00:01:44.271 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:44.271 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:44.271 ++ uname 00:01:44.271 + [[ Linux == \L\i\n\u\x ]] 00:01:44.271 + sudo dmesg -T 00:01:44.271 + sudo dmesg --clear 00:01:44.271 + dmesg_pid=500743 00:01:44.271 + [[ Fedora Linux == FreeBSD ]] 00:01:44.271 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:44.271 + sudo dmesg -Tw 00:01:44.271 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:44.271 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:44.271 + [[ -x /usr/src/fio-static/fio ]] 00:01:44.271 + export FIO_BIN=/usr/src/fio-static/fio 00:01:44.271 + FIO_BIN=/usr/src/fio-static/fio 00:01:44.271 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:44.271 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:44.271 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:44.271 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:44.271 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:44.271 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:44.271 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:44.271 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:44.271 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:44.271 Test configuration: 00:01:44.271 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:44.271 SPDK_TEST_NVMF=1 00:01:44.271 SPDK_TEST_NVME_CLI=1 00:01:44.271 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:44.271 SPDK_TEST_NVMF_NICS=e810 00:01:44.271 SPDK_TEST_VFIOUSER=1 00:01:44.271 SPDK_RUN_UBSAN=1 00:01:44.271 NET_TYPE=phy 00:01:44.271 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:44.271 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:44.271 RUN_NIGHTLY=1 09:11:28 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:44.271 09:11:28 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:44.271 09:11:28 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:44.271 09:11:28 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:44.271 09:11:28 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:44.271 09:11:28 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:44.271 09:11:28 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:44.271 09:11:28 -- paths/export.sh@5 -- $ export PATH 00:01:44.271 09:11:28 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:44.271 09:11:28 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:44.271 09:11:28 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:44.271 09:11:28 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720941088.XXXXXX 00:01:44.271 09:11:28 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720941088.igLQxV 00:01:44.271 09:11:28 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:44.271 09:11:28 -- common/autobuild_common.sh@450 -- $ '[' -n v22.11.4 ']' 00:01:44.271 09:11:28 -- common/autobuild_common.sh@451 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:44.271 09:11:28 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:44.271 09:11:28 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:44.271 09:11:28 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:44.271 09:11:28 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:44.271 09:11:28 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:44.271 09:11:28 -- common/autotest_common.sh@10 -- $ set +x 00:01:44.271 09:11:28 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:01:44.271 09:11:28 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:44.271 09:11:28 -- pm/common@17 -- $ local monitor 00:01:44.271 09:11:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.271 09:11:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.271 09:11:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.271 09:11:28 -- pm/common@21 -- $ date +%s 00:01:44.271 09:11:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.271 09:11:28 -- pm/common@21 -- $ date +%s 00:01:44.271 09:11:28 -- pm/common@25 -- $ sleep 1 00:01:44.271 09:11:28 -- pm/common@21 -- $ date +%s 00:01:44.271 09:11:28 -- pm/common@21 -- $ date +%s 00:01:44.271 09:11:28 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720941088 00:01:44.271 09:11:28 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720941088 00:01:44.271 09:11:28 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720941088 00:01:44.271 09:11:28 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720941088 00:01:44.271 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720941088_collect-vmstat.pm.log 00:01:44.271 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720941088_collect-cpu-load.pm.log 00:01:44.271 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720941088_collect-cpu-temp.pm.log 00:01:44.271 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720941088_collect-bmc-pm.bmc.pm.log 00:01:45.209 09:11:29 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:45.209 09:11:29 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:45.209 09:11:29 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:45.209 09:11:29 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:45.209 09:11:29 -- spdk/autobuild.sh@16 -- $ date -u 00:01:45.209 Sun Jul 14 07:11:29 AM UTC 2024 00:01:45.209 09:11:29 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:45.209 v24.09-pre-202-g719d03c6a 00:01:45.209 09:11:29 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:45.209 09:11:29 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:45.209 09:11:29 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:45.209 09:11:29 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:45.209 09:11:29 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:45.209 09:11:29 -- common/autotest_common.sh@10 -- $ set +x 00:01:45.209 ************************************ 00:01:45.209 START TEST ubsan 00:01:45.209 ************************************ 00:01:45.209 09:11:29 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:45.209 using ubsan 00:01:45.209 00:01:45.209 real 0m0.000s 00:01:45.209 user 0m0.000s 00:01:45.209 sys 0m0.000s 00:01:45.209 09:11:29 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:45.209 09:11:29 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:45.209 ************************************ 00:01:45.209 END TEST ubsan 00:01:45.209 ************************************ 00:01:45.468 09:11:29 -- common/autotest_common.sh@1142 -- $ return 0 00:01:45.468 09:11:29 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:01:45.468 09:11:29 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:45.468 09:11:29 -- common/autobuild_common.sh@436 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:45.468 09:11:29 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']' 00:01:45.468 09:11:29 -- common/autotest_common.sh@1105 -- $ 
xtrace_disable 00:01:45.468 09:11:29 -- common/autotest_common.sh@10 -- $ set +x 00:01:45.468 ************************************ 00:01:45.468 START TEST build_native_dpdk 00:01:45.468 ************************************ 00:01:45.468 09:11:29 build_native_dpdk -- common/autotest_common.sh@1123 -- $ _build_native_dpdk 00:01:45.468 09:11:29 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:45.468 09:11:29 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:45.468 09:11:29 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:45.468 09:11:29 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:45.468 09:11:29 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:45.468 09:11:29 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:45.468 09:11:29 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:45.468 09:11:29 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:45.468 09:11:29 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:45.468 09:11:29 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:45.468 09:11:29 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:45.468 09:11:29 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:45.468 09:11:29 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:45.468 09:11:29 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:45.468 09:11:29 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:45.468 09:11:29 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:45.468 09:11:29 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:45.468 09:11:29 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:45.468 09:11:29 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:45.468 09:11:29 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:45.468 caf0f5d395 version: 22.11.4 00:01:45.469 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:45.469 dc9c799c7d vhost: fix missing spinlock unlock 00:01:45.469 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:45.469 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:45.469 09:11:29 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:45.469 09:11:29 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:45.469 09:11:29 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:01:45.469 09:11:29 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:45.469 09:11:29 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:45.469 09:11:29 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:45.469 09:11:29 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:45.469 09:11:29 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:45.469 09:11:29 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:45.469 09:11:29 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:45.469 09:11:29 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:45.469 09:11:29 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:45.469 09:11:29 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:45.469 09:11:29 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:45.469 09:11:29 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:45.469 09:11:29 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:45.469 09:11:29 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:45.469 09:11:29 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:01:45.469 09:11:29 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:01:45.469 09:11:29 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:45.469 09:11:29 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:45.469 09:11:29 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:45.469 09:11:29 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:45.469 09:11:29 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:45.469 09:11:29 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:45.469 09:11:29 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:45.469 09:11:29 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:01:45.469 09:11:29 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:45.469 09:11:29 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:45.469 09:11:29 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:45.469 
09:11:29 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:45.469 09:11:29 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:45.469 09:11:29 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:45.469 09:11:29 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:01:45.469 09:11:29 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:01:45.469 09:11:29 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:45.469 09:11:29 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:01:45.469 09:11:29 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:01:45.469 09:11:29 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:01:45.469 09:11:29 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:01:45.469 09:11:29 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:45.469 09:11:29 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:01:45.469 09:11:29 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:01:45.469 09:11:29 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:45.469 09:11:29 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:01:45.469 09:11:29 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:45.469 patching file config/rte_config.h 00:01:45.469 Hunk #1 succeeded at 60 (offset 1 line). 00:01:45.469 09:11:29 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:01:45.469 09:11:29 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s 00:01:45.469 09:11:29 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:01:45.469 09:11:29 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:45.469 09:11:29 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:49.663 The Meson build system 00:01:49.663 Version: 1.3.1 00:01:49.663 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:49.663 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:49.663 Build type: native build 00:01:49.663 Program cat found: YES (/usr/bin/cat) 00:01:49.663 Project name: DPDK 00:01:49.663 Project version: 22.11.4 00:01:49.663 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:49.663 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:49.663 Host machine cpu family: x86_64 00:01:49.664 Host machine cpu: x86_64 00:01:49.664 Message: ## Building in Developer Mode ## 00:01:49.664 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:49.664 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:49.664 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:49.664 Program objdump found: YES (/usr/bin/objdump) 00:01:49.664 Program python3 found: YES (/usr/bin/python3) 00:01:49.664 Program cat found: YES (/usr/bin/cat) 00:01:49.664 config/meson.build:83: WARNING: The "machine" option is 
deprecated. Please use "cpu_instruction_set" instead. 00:01:49.664 Checking for size of "void *" : 8 00:01:49.664 Checking for size of "void *" : 8 (cached) 00:01:49.664 Library m found: YES 00:01:49.664 Library numa found: YES 00:01:49.664 Has header "numaif.h" : YES 00:01:49.664 Library fdt found: NO 00:01:49.664 Library execinfo found: NO 00:01:49.664 Has header "execinfo.h" : YES 00:01:49.664 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:49.664 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:49.664 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:49.664 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:49.664 Run-time dependency openssl found: YES 3.0.9 00:01:49.664 Run-time dependency libpcap found: YES 1.10.4 00:01:49.664 Has header "pcap.h" with dependency libpcap: YES 00:01:49.664 Compiler for C supports arguments -Wcast-qual: YES 00:01:49.664 Compiler for C supports arguments -Wdeprecated: YES 00:01:49.664 Compiler for C supports arguments -Wformat: YES 00:01:49.664 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:49.664 Compiler for C supports arguments -Wformat-security: NO 00:01:49.664 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:49.664 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:49.664 Compiler for C supports arguments -Wnested-externs: YES 00:01:49.664 Compiler for C supports arguments -Wold-style-definition: YES 00:01:49.664 Compiler for C supports arguments -Wpointer-arith: YES 00:01:49.664 Compiler for C supports arguments -Wsign-compare: YES 00:01:49.664 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:49.664 Compiler for C supports arguments -Wundef: YES 00:01:49.664 Compiler for C supports arguments -Wwrite-strings: YES 00:01:49.664 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:49.664 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:49.664 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:49.664 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:49.664 Compiler for C supports arguments -mavx512f: YES 00:01:49.664 Checking if "AVX512 checking" compiles: YES 00:01:49.664 Fetching value of define "__SSE4_2__" : 1 00:01:49.664 Fetching value of define "__AES__" : 1 00:01:49.664 Fetching value of define "__AVX__" : 1 00:01:49.664 Fetching value of define "__AVX2__" : (undefined) 00:01:49.664 Fetching value of define "__AVX512BW__" : (undefined) 00:01:49.664 Fetching value of define "__AVX512CD__" : (undefined) 00:01:49.664 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:49.664 Fetching value of define "__AVX512F__" : (undefined) 00:01:49.664 Fetching value of define "__AVX512VL__" : (undefined) 00:01:49.664 Fetching value of define "__PCLMUL__" : 1 00:01:49.664 Fetching value of define "__RDRND__" : 1 00:01:49.664 Fetching value of define "__RDSEED__" : (undefined) 00:01:49.664 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:49.664 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:49.664 Message: lib/kvargs: Defining dependency "kvargs" 00:01:49.664 Message: lib/telemetry: Defining dependency "telemetry" 00:01:49.664 Checking for function "getentropy" : YES 00:01:49.664 Message: lib/eal: Defining dependency "eal" 00:01:49.664 Message: lib/ring: Defining dependency "ring" 00:01:49.664 Message: lib/rcu: Defining dependency "rcu" 00:01:49.664 Message: lib/mempool: Defining dependency "mempool" 00:01:49.664 Message: 
lib/mbuf: Defining dependency "mbuf" 00:01:49.664 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:49.664 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:49.664 Compiler for C supports arguments -mpclmul: YES 00:01:49.664 Compiler for C supports arguments -maes: YES 00:01:49.664 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:49.664 Compiler for C supports arguments -mavx512bw: YES 00:01:49.664 Compiler for C supports arguments -mavx512dq: YES 00:01:49.664 Compiler for C supports arguments -mavx512vl: YES 00:01:49.664 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:49.664 Compiler for C supports arguments -mavx2: YES 00:01:49.664 Compiler for C supports arguments -mavx: YES 00:01:49.664 Message: lib/net: Defining dependency "net" 00:01:49.664 Message: lib/meter: Defining dependency "meter" 00:01:49.664 Message: lib/ethdev: Defining dependency "ethdev" 00:01:49.664 Message: lib/pci: Defining dependency "pci" 00:01:49.664 Message: lib/cmdline: Defining dependency "cmdline" 00:01:49.664 Message: lib/metrics: Defining dependency "metrics" 00:01:49.664 Message: lib/hash: Defining dependency "hash" 00:01:49.664 Message: lib/timer: Defining dependency "timer" 00:01:49.664 Fetching value of define "__AVX2__" : (undefined) (cached) 00:01:49.664 Compiler for C supports arguments -mavx2: YES (cached) 00:01:49.664 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:49.664 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:49.664 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:49.664 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:49.664 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:49.664 Message: lib/acl: Defining dependency "acl" 00:01:49.664 Message: lib/bbdev: Defining dependency "bbdev" 00:01:49.664 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:49.664 Run-time dependency libelf found: YES 0.190 00:01:49.664 Message: lib/bpf: Defining dependency "bpf" 00:01:49.664 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:49.664 Message: lib/compressdev: Defining dependency "compressdev" 00:01:49.664 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:49.664 Message: lib/distributor: Defining dependency "distributor" 00:01:49.664 Message: lib/efd: Defining dependency "efd" 00:01:49.664 Message: lib/eventdev: Defining dependency "eventdev" 00:01:49.664 Message: lib/gpudev: Defining dependency "gpudev" 00:01:49.664 Message: lib/gro: Defining dependency "gro" 00:01:49.664 Message: lib/gso: Defining dependency "gso" 00:01:49.664 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:49.664 Message: lib/jobstats: Defining dependency "jobstats" 00:01:49.664 Message: lib/latencystats: Defining dependency "latencystats" 00:01:49.664 Message: lib/lpm: Defining dependency "lpm" 00:01:49.664 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:49.664 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:49.664 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:49.664 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:49.664 Message: lib/member: Defining dependency "member" 00:01:49.664 Message: lib/pcapng: Defining dependency "pcapng" 00:01:49.664 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:49.664 Message: lib/power: Defining dependency "power" 00:01:49.664 Message: lib/rawdev: Defining dependency "rawdev" 00:01:49.664 
Message: lib/regexdev: Defining dependency "regexdev" 00:01:49.664 Message: lib/dmadev: Defining dependency "dmadev" 00:01:49.664 Message: lib/rib: Defining dependency "rib" 00:01:49.664 Message: lib/reorder: Defining dependency "reorder" 00:01:49.664 Message: lib/sched: Defining dependency "sched" 00:01:49.664 Message: lib/security: Defining dependency "security" 00:01:49.664 Message: lib/stack: Defining dependency "stack" 00:01:49.664 Has header "linux/userfaultfd.h" : YES 00:01:49.664 Message: lib/vhost: Defining dependency "vhost" 00:01:49.664 Message: lib/ipsec: Defining dependency "ipsec" 00:01:49.664 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:49.664 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:49.664 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:49.664 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:49.664 Message: lib/fib: Defining dependency "fib" 00:01:49.664 Message: lib/port: Defining dependency "port" 00:01:49.664 Message: lib/pdump: Defining dependency "pdump" 00:01:49.664 Message: lib/table: Defining dependency "table" 00:01:49.664 Message: lib/pipeline: Defining dependency "pipeline" 00:01:49.664 Message: lib/graph: Defining dependency "graph" 00:01:49.664 Message: lib/node: Defining dependency "node" 00:01:49.664 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:49.664 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:49.664 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:49.664 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:49.664 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:49.664 Compiler for C supports arguments -Wno-unused-value: YES 00:01:50.604 Compiler for C supports arguments -Wno-format: YES 00:01:50.604 Compiler for C supports arguments -Wno-format-security: YES 00:01:50.604 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:50.604 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:50.604 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:50.604 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:50.604 Fetching value of define "__AVX2__" : (undefined) (cached) 00:01:50.604 Compiler for C supports arguments -mavx2: YES (cached) 00:01:50.604 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:50.604 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:50.604 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:50.604 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:50.604 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:50.604 Program doxygen found: YES (/usr/bin/doxygen) 00:01:50.604 Configuring doxy-api.conf using configuration 00:01:50.604 Program sphinx-build found: NO 00:01:50.604 Configuring rte_build_config.h using configuration 00:01:50.604 Message: 00:01:50.604 ================= 00:01:50.604 Applications Enabled 00:01:50.604 ================= 00:01:50.604 00:01:50.604 apps: 00:01:50.604 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:01:50.604 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:01:50.604 test-security-perf, 00:01:50.604 00:01:50.604 Message: 00:01:50.604 ================= 00:01:50.605 Libraries Enabled 00:01:50.605 ================= 00:01:50.605 00:01:50.605 libs: 00:01:50.605 kvargs, telemetry, eal, ring, rcu, 
mempool, mbuf, net, 00:01:50.605 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:01:50.605 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:01:50.605 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:01:50.605 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:01:50.605 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:01:50.605 table, pipeline, graph, node, 00:01:50.605 00:01:50.605 Message: 00:01:50.605 =============== 00:01:50.605 Drivers Enabled 00:01:50.605 =============== 00:01:50.605 00:01:50.605 common: 00:01:50.605 00:01:50.605 bus: 00:01:50.605 pci, vdev, 00:01:50.605 mempool: 00:01:50.605 ring, 00:01:50.605 dma: 00:01:50.605 00:01:50.605 net: 00:01:50.605 i40e, 00:01:50.605 raw: 00:01:50.605 00:01:50.605 crypto: 00:01:50.605 00:01:50.605 compress: 00:01:50.605 00:01:50.605 regex: 00:01:50.605 00:01:50.605 vdpa: 00:01:50.605 00:01:50.605 event: 00:01:50.605 00:01:50.605 baseband: 00:01:50.605 00:01:50.605 gpu: 00:01:50.605 00:01:50.605 00:01:50.605 Message: 00:01:50.605 ================= 00:01:50.605 Content Skipped 00:01:50.605 ================= 00:01:50.605 00:01:50.605 apps: 00:01:50.605 00:01:50.605 libs: 00:01:50.605 kni: explicitly disabled via build config (deprecated lib) 00:01:50.605 flow_classify: explicitly disabled via build config (deprecated lib) 00:01:50.605 00:01:50.605 drivers: 00:01:50.605 common/cpt: not in enabled drivers build config 00:01:50.605 common/dpaax: not in enabled drivers build config 00:01:50.605 common/iavf: not in enabled drivers build config 00:01:50.605 common/idpf: not in enabled drivers build config 00:01:50.605 common/mvep: not in enabled drivers build config 00:01:50.605 common/octeontx: not in enabled drivers build config 00:01:50.605 bus/auxiliary: not in enabled drivers build config 00:01:50.605 bus/dpaa: not in enabled drivers build config 00:01:50.605 bus/fslmc: not in enabled drivers build config 00:01:50.605 bus/ifpga: not in enabled drivers build config 00:01:50.605 bus/vmbus: not in enabled drivers build config 00:01:50.605 common/cnxk: not in enabled drivers build config 00:01:50.605 common/mlx5: not in enabled drivers build config 00:01:50.605 common/qat: not in enabled drivers build config 00:01:50.605 common/sfc_efx: not in enabled drivers build config 00:01:50.605 mempool/bucket: not in enabled drivers build config 00:01:50.605 mempool/cnxk: not in enabled drivers build config 00:01:50.605 mempool/dpaa: not in enabled drivers build config 00:01:50.605 mempool/dpaa2: not in enabled drivers build config 00:01:50.605 mempool/octeontx: not in enabled drivers build config 00:01:50.605 mempool/stack: not in enabled drivers build config 00:01:50.605 dma/cnxk: not in enabled drivers build config 00:01:50.605 dma/dpaa: not in enabled drivers build config 00:01:50.605 dma/dpaa2: not in enabled drivers build config 00:01:50.605 dma/hisilicon: not in enabled drivers build config 00:01:50.605 dma/idxd: not in enabled drivers build config 00:01:50.605 dma/ioat: not in enabled drivers build config 00:01:50.605 dma/skeleton: not in enabled drivers build config 00:01:50.605 net/af_packet: not in enabled drivers build config 00:01:50.605 net/af_xdp: not in enabled drivers build config 00:01:50.605 net/ark: not in enabled drivers build config 00:01:50.605 net/atlantic: not in enabled drivers build config 00:01:50.605 net/avp: not in enabled drivers build config 00:01:50.605 net/axgbe: not in enabled drivers build config 00:01:50.605 net/bnx2x: not in enabled 
drivers build config 00:01:50.605 net/bnxt: not in enabled drivers build config 00:01:50.605 net/bonding: not in enabled drivers build config 00:01:50.605 net/cnxk: not in enabled drivers build config 00:01:50.605 net/cxgbe: not in enabled drivers build config 00:01:50.605 net/dpaa: not in enabled drivers build config 00:01:50.605 net/dpaa2: not in enabled drivers build config 00:01:50.605 net/e1000: not in enabled drivers build config 00:01:50.605 net/ena: not in enabled drivers build config 00:01:50.605 net/enetc: not in enabled drivers build config 00:01:50.605 net/enetfec: not in enabled drivers build config 00:01:50.605 net/enic: not in enabled drivers build config 00:01:50.605 net/failsafe: not in enabled drivers build config 00:01:50.605 net/fm10k: not in enabled drivers build config 00:01:50.605 net/gve: not in enabled drivers build config 00:01:50.605 net/hinic: not in enabled drivers build config 00:01:50.605 net/hns3: not in enabled drivers build config 00:01:50.605 net/iavf: not in enabled drivers build config 00:01:50.605 net/ice: not in enabled drivers build config 00:01:50.605 net/idpf: not in enabled drivers build config 00:01:50.605 net/igc: not in enabled drivers build config 00:01:50.605 net/ionic: not in enabled drivers build config 00:01:50.605 net/ipn3ke: not in enabled drivers build config 00:01:50.605 net/ixgbe: not in enabled drivers build config 00:01:50.605 net/kni: not in enabled drivers build config 00:01:50.605 net/liquidio: not in enabled drivers build config 00:01:50.605 net/mana: not in enabled drivers build config 00:01:50.605 net/memif: not in enabled drivers build config 00:01:50.605 net/mlx4: not in enabled drivers build config 00:01:50.605 net/mlx5: not in enabled drivers build config 00:01:50.605 net/mvneta: not in enabled drivers build config 00:01:50.605 net/mvpp2: not in enabled drivers build config 00:01:50.605 net/netvsc: not in enabled drivers build config 00:01:50.605 net/nfb: not in enabled drivers build config 00:01:50.605 net/nfp: not in enabled drivers build config 00:01:50.605 net/ngbe: not in enabled drivers build config 00:01:50.605 net/null: not in enabled drivers build config 00:01:50.605 net/octeontx: not in enabled drivers build config 00:01:50.605 net/octeon_ep: not in enabled drivers build config 00:01:50.605 net/pcap: not in enabled drivers build config 00:01:50.605 net/pfe: not in enabled drivers build config 00:01:50.605 net/qede: not in enabled drivers build config 00:01:50.605 net/ring: not in enabled drivers build config 00:01:50.605 net/sfc: not in enabled drivers build config 00:01:50.605 net/softnic: not in enabled drivers build config 00:01:50.605 net/tap: not in enabled drivers build config 00:01:50.605 net/thunderx: not in enabled drivers build config 00:01:50.605 net/txgbe: not in enabled drivers build config 00:01:50.605 net/vdev_netvsc: not in enabled drivers build config 00:01:50.605 net/vhost: not in enabled drivers build config 00:01:50.605 net/virtio: not in enabled drivers build config 00:01:50.605 net/vmxnet3: not in enabled drivers build config 00:01:50.605 raw/cnxk_bphy: not in enabled drivers build config 00:01:50.605 raw/cnxk_gpio: not in enabled drivers build config 00:01:50.605 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:50.605 raw/ifpga: not in enabled drivers build config 00:01:50.605 raw/ntb: not in enabled drivers build config 00:01:50.605 raw/skeleton: not in enabled drivers build config 00:01:50.605 crypto/armv8: not in enabled drivers build config 00:01:50.605 crypto/bcmfs: not in 
enabled drivers build config 00:01:50.605 crypto/caam_jr: not in enabled drivers build config 00:01:50.605 crypto/ccp: not in enabled drivers build config 00:01:50.605 crypto/cnxk: not in enabled drivers build config 00:01:50.605 crypto/dpaa_sec: not in enabled drivers build config 00:01:50.605 crypto/dpaa2_sec: not in enabled drivers build config 00:01:50.605 crypto/ipsec_mb: not in enabled drivers build config 00:01:50.605 crypto/mlx5: not in enabled drivers build config 00:01:50.605 crypto/mvsam: not in enabled drivers build config 00:01:50.605 crypto/nitrox: not in enabled drivers build config 00:01:50.605 crypto/null: not in enabled drivers build config 00:01:50.605 crypto/octeontx: not in enabled drivers build config 00:01:50.605 crypto/openssl: not in enabled drivers build config 00:01:50.605 crypto/scheduler: not in enabled drivers build config 00:01:50.605 crypto/uadk: not in enabled drivers build config 00:01:50.605 crypto/virtio: not in enabled drivers build config 00:01:50.605 compress/isal: not in enabled drivers build config 00:01:50.605 compress/mlx5: not in enabled drivers build config 00:01:50.605 compress/octeontx: not in enabled drivers build config 00:01:50.605 compress/zlib: not in enabled drivers build config 00:01:50.605 regex/mlx5: not in enabled drivers build config 00:01:50.605 regex/cn9k: not in enabled drivers build config 00:01:50.605 vdpa/ifc: not in enabled drivers build config 00:01:50.605 vdpa/mlx5: not in enabled drivers build config 00:01:50.605 vdpa/sfc: not in enabled drivers build config 00:01:50.605 event/cnxk: not in enabled drivers build config 00:01:50.605 event/dlb2: not in enabled drivers build config 00:01:50.605 event/dpaa: not in enabled drivers build config 00:01:50.605 event/dpaa2: not in enabled drivers build config 00:01:50.605 event/dsw: not in enabled drivers build config 00:01:50.605 event/opdl: not in enabled drivers build config 00:01:50.605 event/skeleton: not in enabled drivers build config 00:01:50.605 event/sw: not in enabled drivers build config 00:01:50.605 event/octeontx: not in enabled drivers build config 00:01:50.605 baseband/acc: not in enabled drivers build config 00:01:50.605 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:50.605 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:50.605 baseband/la12xx: not in enabled drivers build config 00:01:50.605 baseband/null: not in enabled drivers build config 00:01:50.605 baseband/turbo_sw: not in enabled drivers build config 00:01:50.605 gpu/cuda: not in enabled drivers build config 00:01:50.605 00:01:50.605 00:01:50.605 Build targets in project: 316 00:01:50.605 00:01:50.605 DPDK 22.11.4 00:01:50.605 00:01:50.605 User defined options 00:01:50.605 libdir : lib 00:01:50.605 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:50.605 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:50.605 c_link_args : 00:01:50.605 enable_docs : false 00:01:50.605 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:50.605 enable_kmods : false 00:01:50.605 machine : native 00:01:50.605 tests : false 00:01:50.605 00:01:50.605 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:50.605 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
00:01:50.605 09:11:34 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:01:50.605 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:50.605 [1/745] Generating lib/rte_kvargs_mingw with a custom command 00:01:50.605 [2/745] Generating lib/rte_telemetry_mingw with a custom command 00:01:50.605 [3/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:50.606 [4/745] Generating lib/rte_kvargs_def with a custom command 00:01:50.606 [5/745] Generating lib/rte_telemetry_def with a custom command 00:01:50.606 [6/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:50.863 [7/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:50.863 [8/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:50.863 [9/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:50.863 [10/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:50.863 [11/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:50.863 [12/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:50.863 [13/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:50.863 [14/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:50.863 [15/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:50.863 [16/745] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:50.863 [17/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:50.863 [18/745] Linking static target lib/librte_kvargs.a 00:01:50.863 [19/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:50.863 [20/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:50.863 [21/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:50.863 [22/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:50.863 [23/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:50.863 [24/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:50.863 [25/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:50.863 [26/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:50.863 [27/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:50.863 [28/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:50.863 [29/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:50.863 [30/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:50.863 [31/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:50.863 [32/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:01:50.863 [33/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:50.863 [34/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:50.863 [35/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:50.863 [36/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:50.863 [37/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:50.863 [38/745] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:50.863 [39/745] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:51.125 [40/745] Generating lib/rte_eal_def with a custom command 00:01:51.125 [41/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:51.125 [42/745] Generating lib/rte_eal_mingw with a custom command 00:01:51.125 [43/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:51.125 [44/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:51.125 [45/745] Generating lib/rte_ring_def with a custom command 00:01:51.125 [46/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:51.125 [47/745] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:51.125 [48/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:51.125 [49/745] Generating lib/rte_ring_mingw with a custom command 00:01:51.125 [50/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:51.125 [51/745] Generating lib/rte_rcu_def with a custom command 00:01:51.125 [52/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:51.125 [53/745] Generating lib/rte_rcu_mingw with a custom command 00:01:51.125 [54/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:51.125 [55/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:51.125 [56/745] Generating lib/rte_mempool_def with a custom command 00:01:51.125 [57/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:51.125 [58/745] Generating lib/rte_mempool_mingw with a custom command 00:01:51.125 [59/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:51.125 [60/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:51.125 [61/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:01:51.125 [62/745] Generating lib/rte_mbuf_mingw with a custom command 00:01:51.125 [63/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:51.125 [64/745] Generating lib/rte_mbuf_def with a custom command 00:01:51.125 [65/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:51.125 [66/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:51.125 [67/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:51.125 [68/745] Generating lib/rte_meter_mingw with a custom command 00:01:51.125 [69/745] Generating lib/rte_net_def with a custom command 00:01:51.125 [70/745] Generating lib/rte_net_mingw with a custom command 00:01:51.125 [71/745] Generating lib/rte_meter_def with a custom command 00:01:51.125 [72/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:51.125 [73/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:51.125 [74/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:51.125 [75/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:51.125 [76/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:51.125 [77/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:51.125 [78/745] Generating lib/rte_ethdev_def with a custom command 00:01:51.384 [79/745] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.384 [80/745] Compiling C object 
lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:51.384 [81/745] Linking static target lib/librte_ring.a 00:01:51.384 [82/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:51.384 [83/745] Generating lib/rte_ethdev_mingw with a custom command 00:01:51.384 [84/745] Linking target lib/librte_kvargs.so.23.0 00:01:51.384 [85/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:51.384 [86/745] Generating lib/rte_pci_def with a custom command 00:01:51.384 [87/745] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:51.384 [88/745] Linking static target lib/librte_meter.a 00:01:51.384 [89/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:51.384 [90/745] Generating lib/rte_pci_mingw with a custom command 00:01:51.384 [91/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:51.384 [92/745] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:51.384 [93/745] Linking static target lib/librte_pci.a 00:01:51.645 [94/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:51.645 [95/745] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:01:51.645 [96/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:51.645 [97/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:51.645 [98/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:51.645 [99/745] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.645 [100/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:51.645 [101/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:51.645 [102/745] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.904 [103/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:51.904 [104/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:51.904 [105/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:51.904 [106/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:51.904 [107/745] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.904 [108/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:51.904 [109/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:51.904 [110/745] Generating lib/rte_cmdline_def with a custom command 00:01:51.904 [111/745] Linking static target lib/librte_telemetry.a 00:01:51.904 [112/745] Generating lib/rte_cmdline_mingw with a custom command 00:01:51.904 [113/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:51.904 [114/745] Generating lib/rte_metrics_def with a custom command 00:01:51.904 [115/745] Generating lib/rte_metrics_mingw with a custom command 00:01:51.904 [116/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:51.904 [117/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:51.904 [118/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:51.904 [119/745] Generating lib/rte_hash_def with a custom command 00:01:51.904 [120/745] Generating lib/rte_hash_mingw with a custom command 00:01:51.904 [121/745] Generating lib/rte_timer_def with a custom command 00:01:51.904 [122/745] Generating 
lib/rte_timer_mingw with a custom command 00:01:52.174 [123/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:52.174 [124/745] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:52.174 [125/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:52.174 [126/745] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:52.174 [127/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:52.174 [128/745] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:52.174 [129/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:52.174 [130/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:52.174 [131/745] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:52.174 [132/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:52.174 [133/745] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:52.174 [134/745] Generating lib/rte_acl_def with a custom command 00:01:52.174 [135/745] Generating lib/rte_acl_mingw with a custom command 00:01:52.435 [136/745] Generating lib/rte_bbdev_def with a custom command 00:01:52.435 [137/745] Generating lib/rte_bbdev_mingw with a custom command 00:01:52.435 [138/745] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.435 [139/745] Generating lib/rte_bitratestats_mingw with a custom command 00:01:52.435 [140/745] Generating lib/rte_bitratestats_def with a custom command 00:01:52.435 [141/745] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:52.435 [142/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:52.435 [143/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:52.435 [144/745] Linking target lib/librte_telemetry.so.23.0 00:01:52.435 [145/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:52.435 [146/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:52.435 [147/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:52.435 [148/745] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:52.435 [149/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:52.435 [150/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:52.435 [151/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:52.435 [152/745] Generating lib/rte_bpf_def with a custom command 00:01:52.435 [153/745] Generating lib/rte_bpf_mingw with a custom command 00:01:52.697 [154/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:52.697 [155/745] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:52.697 [156/745] Generating lib/rte_cfgfile_def with a custom command 00:01:52.697 [157/745] Generating lib/rte_cfgfile_mingw with a custom command 00:01:52.697 [158/745] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:01:52.697 [159/745] Generating lib/rte_compressdev_def with a custom command 00:01:52.697 [160/745] Generating lib/rte_compressdev_mingw with a custom command 00:01:52.697 [161/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:52.697 [162/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:52.697 [163/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 
00:01:52.697 [164/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:52.697 [165/745] Generating lib/rte_cryptodev_def with a custom command 00:01:52.697 [166/745] Generating lib/rte_cryptodev_mingw with a custom command 00:01:52.697 [167/745] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:52.697 [168/745] Linking static target lib/librte_rcu.a 00:01:52.697 [169/745] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:52.697 [170/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:52.697 [171/745] Linking static target lib/librte_timer.a 00:01:52.697 [172/745] Generating lib/rte_distributor_def with a custom command 00:01:52.697 [173/745] Linking static target lib/librte_cmdline.a 00:01:52.697 [174/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:52.697 [175/745] Generating lib/rte_distributor_mingw with a custom command 00:01:52.956 [176/745] Generating lib/rte_efd_def with a custom command 00:01:52.956 [177/745] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:52.956 [178/745] Generating lib/rte_efd_mingw with a custom command 00:01:52.956 [179/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:52.956 [180/745] Linking static target lib/librte_net.a 00:01:52.956 [181/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:52.956 [182/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:52.956 [183/745] Linking static target lib/librte_mempool.a 00:01:52.956 [184/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:52.956 [185/745] Linking static target lib/librte_metrics.a 00:01:52.956 [186/745] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:53.219 [187/745] Linking static target lib/librte_cfgfile.a 00:01:53.219 [188/745] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.219 [189/745] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:53.219 [190/745] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.219 [191/745] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.483 [192/745] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:53.483 [193/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:53.483 [194/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:53.483 [195/745] Generating lib/rte_eventdev_mingw with a custom command 00:01:53.483 [196/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:53.483 [197/745] Generating lib/rte_eventdev_def with a custom command 00:01:53.483 [198/745] Linking static target lib/librte_eal.a 00:01:53.483 [199/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:53.483 [200/745] Generating lib/rte_gpudev_def with a custom command 00:01:53.483 [201/745] Generating lib/rte_gpudev_mingw with a custom command 00:01:53.483 [202/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:53.483 [203/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:53.483 [204/745] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:53.483 [205/745] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.483 [206/745] Linking static target lib/librte_bitratestats.a 00:01:53.743 [207/745] Compiling C object 
lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:53.743 [208/745] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.743 [209/745] Generating lib/rte_gro_def with a custom command 00:01:53.743 [210/745] Generating lib/rte_gro_mingw with a custom command 00:01:53.743 [211/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:53.743 [212/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:53.743 [213/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:54.007 [214/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:54.007 [215/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:54.007 [216/745] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.007 [217/745] Generating lib/rte_gso_def with a custom command 00:01:54.007 [218/745] Generating lib/rte_gso_mingw with a custom command 00:01:54.007 [219/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:54.007 [220/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:54.007 [221/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:54.269 [222/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:54.269 [223/745] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:54.269 [224/745] Linking static target lib/librte_bbdev.a 00:01:54.269 [225/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:54.269 [226/745] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.269 [227/745] Generating lib/rte_ip_frag_def with a custom command 00:01:54.269 [228/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:54.269 [229/745] Generating lib/rte_ip_frag_mingw with a custom command 00:01:54.269 [230/745] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.269 [231/745] Generating lib/rte_jobstats_def with a custom command 00:01:54.269 [232/745] Generating lib/rte_jobstats_mingw with a custom command 00:01:54.269 [233/745] Generating lib/rte_latencystats_def with a custom command 00:01:54.269 [234/745] Generating lib/rte_latencystats_mingw with a custom command 00:01:54.269 [235/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:54.269 [236/745] Generating lib/rte_lpm_def with a custom command 00:01:54.269 [237/745] Generating lib/rte_lpm_mingw with a custom command 00:01:54.531 [238/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:54.531 [239/745] Linking static target lib/librte_compressdev.a 00:01:54.531 [240/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:54.531 [241/745] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:54.531 [242/745] Linking static target lib/librte_jobstats.a 00:01:54.531 [243/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:54.794 [244/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:54.794 [245/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:54.794 [246/745] Linking static target lib/librte_distributor.a 00:01:54.794 [247/745] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:54.794 [248/745] 
Generating lib/rte_member_def with a custom command 00:01:54.794 [249/745] Generating lib/rte_member_mingw with a custom command 00:01:54.794 [250/745] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:55.055 [251/745] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:55.055 [252/745] Generating lib/rte_pcapng_def with a custom command 00:01:55.055 [253/745] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.055 [254/745] Generating lib/rte_pcapng_mingw with a custom command 00:01:55.055 [255/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:55.055 [256/745] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.055 [257/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:55.055 [258/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:55.055 [259/745] Linking static target lib/librte_bpf.a 00:01:55.055 [260/745] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:55.055 [261/745] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:55.055 [262/745] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:55.314 [263/745] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.314 [264/745] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:55.314 [265/745] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:55.314 [266/745] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:55.314 [267/745] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:55.314 [268/745] Generating lib/rte_rawdev_def with a custom command 00:01:55.314 [269/745] Generating lib/rte_power_def with a custom command 00:01:55.314 [270/745] Generating lib/rte_power_mingw with a custom command 00:01:55.314 [271/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:55.314 [272/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:55.314 [273/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:55.314 [274/745] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:55.314 [275/745] Linking static target lib/librte_gro.a 00:01:55.314 [276/745] Linking static target lib/librte_gpudev.a 00:01:55.314 [277/745] Generating lib/rte_regexdev_def with a custom command 00:01:55.314 [278/745] Generating lib/rte_rawdev_mingw with a custom command 00:01:55.314 [279/745] Generating lib/rte_regexdev_mingw with a custom command 00:01:55.314 [280/745] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:55.314 [281/745] Generating lib/rte_dmadev_def with a custom command 00:01:55.314 [282/745] Generating lib/rte_dmadev_mingw with a custom command 00:01:55.581 [283/745] Generating lib/rte_rib_mingw with a custom command 00:01:55.581 [284/745] Generating lib/rte_rib_def with a custom command 00:01:55.581 [285/745] Generating lib/rte_reorder_def with a custom command 00:01:55.581 [286/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:55.581 [287/745] Generating lib/rte_reorder_mingw with a custom command 00:01:55.581 [288/745] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.581 [289/745] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:01:55.581 [290/745] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.843 
[291/745] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:55.843 [292/745] Generating lib/rte_sched_def with a custom command 00:01:55.843 [293/745] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:55.843 [294/745] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:55.843 [295/745] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:55.843 [296/745] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:55.844 [297/745] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:55.844 [298/745] Generating lib/rte_security_def with a custom command 00:01:55.844 [299/745] Linking static target lib/librte_latencystats.a 00:01:55.844 [300/745] Generating lib/rte_sched_mingw with a custom command 00:01:55.844 [301/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:55.844 [302/745] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:55.844 [303/745] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:55.844 [304/745] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.844 [305/745] Generating lib/rte_security_mingw with a custom command 00:01:55.844 [306/745] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:55.844 [307/745] Generating lib/rte_stack_mingw with a custom command 00:01:55.844 [308/745] Generating lib/rte_stack_def with a custom command 00:01:55.844 [309/745] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:55.844 [310/745] Linking static target lib/librte_rawdev.a 00:01:55.844 [311/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:55.844 [312/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:55.844 [313/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:56.112 [314/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:56.112 [315/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:56.112 [316/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:56.112 [317/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:56.112 [318/745] Linking static target lib/librte_stack.a 00:01:56.112 [319/745] Generating lib/rte_vhost_def with a custom command 00:01:56.112 [320/745] Generating lib/rte_vhost_mingw with a custom command 00:01:56.112 [321/745] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:56.112 [322/745] Linking static target lib/librte_dmadev.a 00:01:56.112 [323/745] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:56.112 [324/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:56.112 [325/745] Linking static target lib/librte_ip_frag.a 00:01:56.112 [326/745] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.381 [327/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:56.381 [328/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:56.381 [329/745] Generating lib/rte_ipsec_def with a custom command 00:01:56.381 [330/745] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.381 [331/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:56.381 [332/745] Generating lib/rte_ipsec_mingw with a 
custom command 00:01:56.645 [333/745] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:01:56.645 [334/745] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.645 [335/745] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.645 [336/745] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.645 [337/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:56.645 [338/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:56.645 [339/745] Generating lib/rte_fib_def with a custom command 00:01:56.645 [340/745] Generating lib/rte_fib_mingw with a custom command 00:01:56.645 [341/745] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:56.906 [342/745] Linking static target lib/librte_gso.a 00:01:56.906 [343/745] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:56.906 [344/745] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:56.906 [345/745] Linking static target lib/librte_regexdev.a 00:01:56.906 [346/745] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.166 [347/745] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:57.166 [348/745] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.166 [349/745] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:57.166 [350/745] Linking static target lib/librte_efd.a 00:01:57.166 [351/745] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:57.166 [352/745] Linking static target lib/librte_pcapng.a 00:01:57.166 [353/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:57.166 [354/745] Linking static target lib/librte_lpm.a 00:01:57.425 [355/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:57.425 [356/745] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:57.425 [357/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:57.425 [358/745] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:57.425 [359/745] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:57.425 [360/745] Linking static target lib/librte_reorder.a 00:01:57.425 [361/745] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:57.425 [362/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:57.425 [363/745] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.425 [364/745] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:57.425 [365/745] Generating lib/rte_port_def with a custom command 00:01:57.689 [366/745] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:57.689 [367/745] Generating lib/rte_port_mingw with a custom command 00:01:57.689 [368/745] Linking static target lib/acl/libavx2_tmp.a 00:01:57.689 [369/745] Generating lib/rte_pdump_def with a custom command 00:01:57.689 [370/745] Generating lib/rte_pdump_mingw with a custom command 00:01:57.689 [371/745] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:01:57.689 [372/745] Linking static target lib/fib/libtrie_avx512_tmp.a 00:01:57.689 [373/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:57.689 [374/745] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:57.689 [375/745] Compiling C 
object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:57.689 [376/745] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.689 [377/745] Linking static target lib/librte_security.a 00:01:57.689 [378/745] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:57.689 [379/745] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:01:57.689 [380/745] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:01:57.689 [381/745] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:57.689 [382/745] Linking static target lib/librte_power.a 00:01:57.950 [383/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:57.950 [384/745] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.950 [385/745] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.950 [386/745] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:57.950 [387/745] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:57.950 [388/745] Linking static target lib/librte_hash.a 00:01:57.950 [389/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:57.950 [390/745] Linking static target lib/librte_rib.a 00:01:57.950 [391/745] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.216 [392/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:58.216 [393/745] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:01:58.216 [394/745] Linking static target lib/acl/libavx512_tmp.a 00:01:58.216 [395/745] Linking static target lib/librte_acl.a 00:01:58.216 [396/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:58.216 [397/745] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:58.216 [398/745] Generating lib/rte_table_def with a custom command 00:01:58.478 [399/745] Generating lib/rte_table_mingw with a custom command 00:01:58.478 [400/745] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.478 [401/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:58.478 [402/745] Linking static target lib/librte_ethdev.a 00:01:58.740 [403/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:58.740 [404/745] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.740 [405/745] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.740 [406/745] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:59.001 [407/745] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:59.001 [408/745] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:59.001 [409/745] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:59.001 [410/745] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:59.001 [411/745] Generating lib/rte_pipeline_def with a custom command 00:01:59.001 [412/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:59.001 [413/745] Generating lib/rte_pipeline_mingw with a custom command 00:01:59.001 [414/745] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.001 [415/745] Linking static target lib/librte_mbuf.a 00:01:59.001 [416/745] Compiling C object 
lib/librte_fib.a.p/fib_trie.c.o 00:01:59.001 [417/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:59.001 [418/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:59.001 [419/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:59.001 [420/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:59.001 [421/745] Generating lib/rte_graph_mingw with a custom command 00:01:59.001 [422/745] Generating lib/rte_graph_def with a custom command 00:01:59.001 [423/745] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:59.001 [424/745] Linking static target lib/librte_fib.a 00:01:59.260 [425/745] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:59.260 [426/745] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.260 [427/745] Linking static target lib/librte_member.a 00:01:59.260 [428/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:59.260 [429/745] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:59.260 [430/745] Linking static target lib/librte_eventdev.a 00:01:59.260 [431/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:59.523 [432/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:59.523 [433/745] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:59.523 [434/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:59.523 [435/745] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:59.523 [436/745] Generating lib/rte_node_def with a custom command 00:01:59.523 [437/745] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:59.523 [438/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:59.523 [439/745] Generating lib/rte_node_mingw with a custom command 00:01:59.523 [440/745] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.523 [441/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:59.785 [442/745] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:59.785 [443/745] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:59.785 [444/745] Generating drivers/rte_bus_pci_def with a custom command 00:01:59.785 [445/745] Linking static target lib/librte_sched.a 00:01:59.785 [446/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:59.785 [447/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:59.785 [448/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:59.785 [449/745] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.785 [450/745] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.785 [451/745] Generating drivers/rte_bus_pci_mingw with a custom command 00:01:59.785 [452/745] Generating drivers/rte_bus_vdev_def with a custom command 00:01:59.785 [453/745] Generating drivers/rte_bus_vdev_mingw with a custom command 00:02:00.047 [454/745] Generating drivers/rte_mempool_ring_def with a custom command 00:02:00.047 [455/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:00.047 [456/745] Generating drivers/rte_mempool_ring_mingw with a custom command 00:02:00.047 [457/745] Compiling C object 
lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:00.047 [458/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:00.047 [459/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:00.047 [460/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:00.047 [461/745] Linking static target lib/librte_cryptodev.a 00:02:00.047 [462/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:00.047 [463/745] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:00.047 [464/745] Linking static target lib/librte_pdump.a 00:02:00.047 [465/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:00.306 [466/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:00.306 [467/745] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:00.306 [468/745] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:00.306 [469/745] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:00.306 [470/745] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:00.306 [471/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:00.306 [472/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:00.306 [473/745] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:00.306 [474/745] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:00.306 [475/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:00.306 [476/745] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:00.569 [477/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:00.569 [478/745] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.569 [479/745] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:00.569 [480/745] Generating drivers/rte_net_i40e_def with a custom command 00:02:00.569 [481/745] Generating drivers/rte_net_i40e_mingw with a custom command 00:02:00.569 [482/745] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:00.569 [483/745] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.831 [484/745] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:00.831 [485/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:00.831 [486/745] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:00.831 [487/745] Linking static target drivers/librte_bus_vdev.a 00:02:00.831 [488/745] Linking static target lib/librte_table.a 00:02:00.831 [489/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:00.831 [490/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:00.831 [491/745] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:00.831 [492/745] Linking static target lib/librte_ipsec.a 00:02:00.831 [493/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:01.092 [494/745] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:01.092 [495/745] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.092 [496/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:01.092 [497/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 
00:02:01.359 [498/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:01.359 [499/745] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:01.359 [500/745] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:01.359 [501/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:01.359 [502/745] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:01.359 [503/745] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:01.359 [504/745] Linking static target drivers/librte_bus_pci.a 00:02:01.359 [505/745] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:01.359 [506/745] Linking static target lib/librte_graph.a 00:02:01.359 [507/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:01.359 [508/745] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.642 [509/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:01.642 [510/745] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:01.642 [511/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:01.642 [512/745] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:01.909 [513/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:01.909 [514/745] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.909 [515/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:01.909 [516/745] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.168 [517/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:02.168 [518/745] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:02.430 [519/745] Linking static target lib/librte_port.a 00:02:02.430 [520/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:02.430 [521/745] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.430 [522/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:02.430 [523/745] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:02.430 [524/745] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:02.689 [525/745] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:02.689 [526/745] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:02.689 [527/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:02.953 [528/745] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.953 [529/745] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:02.953 [530/745] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:02.953 [531/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:02.953 [532/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:02.953 [533/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:02.953 [534/745] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:02.953 [535/745] Linking static target 
drivers/librte_mempool_ring.a 00:02:02.953 [536/745] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:03.229 [537/745] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:03.229 [538/745] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.229 [539/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:03.229 [540/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:03.505 [541/745] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.505 [542/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:03.764 [543/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:03.765 [544/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:03.765 [545/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:03.765 [546/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:03.765 [547/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:04.028 [548/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:04.028 [549/745] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:04.028 [550/745] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:04.028 [551/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:04.289 [552/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:04.289 [553/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:04.552 [554/745] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:04.552 [555/745] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:04.552 [556/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:04.552 [557/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:04.819 [558/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:04.819 [559/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:05.084 [560/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:05.084 [561/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:05.084 [562/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:05.344 [563/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:05.344 [564/745] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:05.344 [565/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:05.344 [566/745] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:05.344 [567/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:05.344 [568/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:05.344 [569/745] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:05.344 [570/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:05.344 [571/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 
00:02:05.615 [572/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:05.615 [573/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:05.882 [574/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:05.882 [575/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:05.882 [576/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:05.882 [577/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:05.882 [578/745] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.882 [579/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:05.882 [580/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:06.148 [581/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:06.148 [582/745] Linking target lib/librte_eal.so.23.0 00:02:06.148 [583/745] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:06.148 [584/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:06.148 [585/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:06.414 [586/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:06.414 [587/745] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:06.414 [588/745] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.414 [589/745] Linking target lib/librte_ring.so.23.0 00:02:06.414 [590/745] Linking target lib/librte_meter.so.23.0 00:02:06.414 [591/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:06.678 [592/745] Linking target lib/librte_pci.so.23.0 00:02:06.678 [593/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:06.678 [594/745] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:06.678 [595/745] Linking target lib/librte_rcu.so.23.0 00:02:06.678 [596/745] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:06.678 [597/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:06.678 [598/745] Linking target lib/librte_timer.so.23.0 00:02:06.940 [599/745] Linking target lib/librte_mempool.so.23.0 00:02:06.940 [600/745] Linking target lib/librte_acl.so.23.0 00:02:06.940 [601/745] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:06.940 [602/745] Linking target lib/librte_cfgfile.so.23.0 00:02:06.940 [603/745] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:06.940 [604/745] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:06.940 [605/745] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:06.940 [606/745] Linking target lib/librte_jobstats.so.23.0 00:02:06.940 [607/745] Linking target lib/librte_rawdev.so.23.0 00:02:06.940 [608/745] Linking target lib/librte_dmadev.so.23.0 00:02:07.201 [609/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:07.201 [610/745] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:07.201 [611/745] Generating symbol file 
lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:07.201 [612/745] Linking target lib/librte_stack.so.23.0 00:02:07.201 [613/745] Linking target lib/librte_graph.so.23.0 00:02:07.201 [614/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:07.201 [615/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:07.201 [616/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:07.201 [617/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:07.201 [618/745] Linking target drivers/librte_bus_pci.so.23.0 00:02:07.201 [619/745] Linking target drivers/librte_bus_vdev.so.23.0 00:02:07.201 [620/745] Linking target lib/librte_mbuf.so.23.0 00:02:07.201 [621/745] Linking target lib/librte_rib.so.23.0 00:02:07.201 [622/745] Linking target drivers/librte_mempool_ring.so.23.0 00:02:07.201 [623/745] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:07.201 [624/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:07.201 [625/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:07.201 [626/745] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:07.201 [627/745] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:07.201 [628/745] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:07.201 [629/745] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:07.460 [630/745] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:07.460 [631/745] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:07.460 [632/745] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:07.460 [633/745] Linking target lib/librte_fib.so.23.0 00:02:07.460 [634/745] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:07.460 [635/745] Linking target lib/librte_bbdev.so.23.0 00:02:07.460 [636/745] Linking target lib/librte_compressdev.so.23.0 00:02:07.460 [637/745] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:07.460 [638/745] Linking target lib/librte_distributor.so.23.0 00:02:07.460 [639/745] Linking target lib/librte_reorder.so.23.0 00:02:07.460 [640/745] Linking target lib/librte_gpudev.so.23.0 00:02:07.460 [641/745] Linking target lib/librte_net.so.23.0 00:02:07.460 [642/745] Linking target lib/librte_sched.so.23.0 00:02:07.460 [643/745] Linking target lib/librte_regexdev.so.23.0 00:02:07.460 [644/745] Linking target lib/librte_cryptodev.so.23.0 00:02:07.460 [645/745] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:07.460 [646/745] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:07.719 [647/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:07.719 [648/745] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:07.719 [649/745] Linking target lib/librte_cmdline.so.23.0 00:02:07.719 [650/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:07.719 [651/745] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:07.719 [652/745] Linking target lib/librte_hash.so.23.0 00:02:07.719 [653/745] Linking target lib/librte_ethdev.so.23.0 00:02:07.719 [654/745] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 
00:02:07.719 [655/745] Linking target lib/librte_security.so.23.0 00:02:07.719 [656/745] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:07.719 [657/745] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:07.719 [658/745] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:07.719 [659/745] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:07.719 [660/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:07.719 [661/745] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:07.977 [662/745] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:07.977 [663/745] Linking target lib/librte_pcapng.so.23.0 00:02:07.977 [664/745] Linking target lib/librte_gso.so.23.0 00:02:07.977 [665/745] Linking target lib/librte_gro.so.23.0 00:02:07.977 [666/745] Linking target lib/librte_bpf.so.23.0 00:02:07.977 [667/745] Linking target lib/librte_metrics.so.23.0 00:02:07.977 [668/745] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:07.977 [669/745] Linking target lib/librte_power.so.23.0 00:02:07.977 [670/745] Linking target lib/librte_lpm.so.23.0 00:02:07.977 [671/745] Linking target lib/librte_efd.so.23.0 00:02:07.977 [672/745] Linking target lib/librte_ip_frag.so.23.0 00:02:07.977 [673/745] Linking target lib/librte_member.so.23.0 00:02:07.977 [674/745] Linking target lib/librte_ipsec.so.23.0 00:02:07.977 [675/745] Linking target lib/librte_eventdev.so.23.0 00:02:07.977 [676/745] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:07.977 [677/745] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:07.977 [678/745] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:07.977 [679/745] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:07.977 [680/745] Linking target lib/librte_latencystats.so.23.0 00:02:07.977 [681/745] Linking target lib/librte_pdump.so.23.0 00:02:08.236 [682/745] Linking target lib/librte_bitratestats.so.23.0 00:02:08.236 [683/745] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:08.236 [684/745] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:08.236 [685/745] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:08.236 [686/745] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:08.236 [687/745] Linking target lib/librte_port.so.23.0 00:02:08.236 [688/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:08.236 [689/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:08.236 [690/745] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:08.236 [691/745] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:08.495 [692/745] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:08.495 [693/745] Linking target lib/librte_table.so.23.0 00:02:08.495 [694/745] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:08.753 [695/745] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:08.753 [696/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:08.753 [697/745] Compiling C object 
app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:09.011 [698/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:09.011 [699/745] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:09.269 [700/745] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:09.526 [701/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:09.526 [702/745] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:09.526 [703/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:09.783 [704/745] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:09.783 [705/745] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:09.783 [706/745] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:09.784 [707/745] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:10.041 [708/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:10.041 [709/745] Linking static target drivers/librte_net_i40e.a 00:02:10.298 [710/745] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:10.556 [711/745] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.556 [712/745] Linking target drivers/librte_net_i40e.so.23.0 00:02:11.121 [713/745] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:11.121 [714/745] Linking static target lib/librte_node.a 00:02:11.121 [715/745] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.379 [716/745] Linking target lib/librte_node.so.23.0 00:02:11.379 [717/745] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:12.315 [718/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:12.573 [719/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:20.685 [720/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:59.392 [721/745] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:59.392 [722/745] Linking static target lib/librte_vhost.a 00:02:59.392 [723/745] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.392 [724/745] Linking target lib/librte_vhost.so.23.0 00:03:05.953 [725/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:05.953 [726/745] Linking static target lib/librte_pipeline.a 00:03:06.211 [727/745] Linking target app/dpdk-proc-info 00:03:06.211 [728/745] Linking target app/dpdk-test-fib 00:03:06.211 [729/745] Linking target app/dpdk-pdump 00:03:06.211 [730/745] Linking target app/dpdk-test-cmdline 00:03:06.211 [731/745] Linking target app/dpdk-test-sad 00:03:06.211 [732/745] Linking target app/dpdk-test-acl 00:03:06.211 [733/745] Linking target app/dpdk-test-gpudev 00:03:06.211 [734/745] Linking target app/dpdk-dumpcap 00:03:06.211 [735/745] Linking target app/dpdk-test-flow-perf 00:03:06.211 [736/745] Linking target app/dpdk-test-regex 00:03:06.211 [737/745] Linking target app/dpdk-test-pipeline 00:03:06.211 [738/745] Linking target app/dpdk-test-crypto-perf 00:03:06.211 [739/745] Linking target app/dpdk-test-bbdev 00:03:06.211 [740/745] Linking target app/dpdk-test-security-perf 00:03:06.211 [741/745] Linking target app/dpdk-test-eventdev 00:03:06.211 [742/745] Linking target 
app/dpdk-test-compress-perf 00:03:06.211 [743/745] Linking target app/dpdk-testpmd 00:03:08.112 [744/745] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.112 [745/745] Linking target lib/librte_pipeline.so.23.0 00:03:08.112 09:12:52 build_native_dpdk -- common/autobuild_common.sh@188 -- $ uname -s 00:03:08.112 09:12:52 build_native_dpdk -- common/autobuild_common.sh@188 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:08.112 09:12:52 build_native_dpdk -- common/autobuild_common.sh@201 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:03:08.112 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:03:08.112 [0/1] Installing files. 00:03:08.375 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:03:08.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.376 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 
00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:08.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 
00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:08.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:08.378 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.378 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:08.378 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.378 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:08.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:08.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:08.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:03:08.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:08.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:08.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:08.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:08.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:08.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:08.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:08.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:08.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:08.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.379 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:08.379 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:08.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:08.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:08.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:08.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:08.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:08.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:08.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:08.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:08.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:08.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:08.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:08.379 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.380 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 
00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:03:08.380 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:03:08.380 Installing lib/librte_kvargs.a 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.380 Installing lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.380 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 
00:03:08.381 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 
00:03:08.381 Installing lib/librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.381 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.959 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.959 Installing lib/librte_table.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.959 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.959 Installing lib/librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.959 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.959 Installing lib/librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.959 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.959 Installing lib/librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.959 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.959 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:08.959 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.959 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:08.959 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.959 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:08.959 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:08.959 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:08.959 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:08.960 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:08.960 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:08.960 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:08.960 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:08.960 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:08.960 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:08.960 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:08.960 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:08.960 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:08.960 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:08.960 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:08.960 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:08.960 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:08.960 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:08.960 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:08.960 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 
00:03:08.960 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.960 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.960 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.960 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:08.960 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:08.960 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:08.960 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:08.960 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:08.960 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:08.960 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:08.960 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:08.960 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:08.960 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:08.961 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:08.961 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:08.961 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.961 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.961 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.961 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.961 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.961 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.961 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.961 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.961 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.961 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.961 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.961 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.961 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.961 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.961 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.961 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.961 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.961 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.962 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.962 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.962 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.962 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.962 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.962 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.962 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:03:08.962 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.962 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.962 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.962 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.962 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.962 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.962 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.962 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.962 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.962 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.962 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.962 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.962 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.962 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.962 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.962 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.962 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.963 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.963 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.963 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.963 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.963 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.963 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.963 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.963 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.963 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.963 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.963 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.963 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.963 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.963 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.963 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.963 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.964 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.964 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.964 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.964 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.964 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.964 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.964 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.964 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.964 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.964 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.964 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.964 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.964 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.964 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.964 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.964 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.964 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.964 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.964 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.964 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.964 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.964 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.964 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.964 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.964 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.964 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.964 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.964 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.964 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.964 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.964 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.964 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.964 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.965 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.966 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.966 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.967 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.967 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.967 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.968 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.968 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.968 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.968 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.968 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.968 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.968 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.968 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.968 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.968 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.968 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.968 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.968 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.968 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.968 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.968 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.968 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.968 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.968 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.968 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:03:08.968 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.968 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.968 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.968 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.968 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.968 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.968 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.969 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.969 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.969 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.969 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.969 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.969 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.969 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.969 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.969 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.969 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.969 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.969 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.969 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.969 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.969 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.969 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.969 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.969 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.969 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.969 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.969 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.969 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.969 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.969 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.969 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.969 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.970 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.970 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.970 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.970 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.970 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.970 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.970 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.970 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.970 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.970 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.970 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.970 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.970 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.970 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.970 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.970 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.971 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.971 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.971 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:03:08.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:08.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:08.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:08.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:08.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:08.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:08.972 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:08.972 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.23 00:03:08.972 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:03:08.972 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.23 00:03:08.972 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:03:08.972 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.23 00:03:08.972 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:03:08.972 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:03:08.972 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:03:08.972 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:03:08.972 Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:03:08.972 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:03:08.972 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:03:08.972 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:03:08.972 Installing symlink pointing to librte_mbuf.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:03:08.972 Installing symlink pointing to librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.23 00:03:08.972 Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:03:08.972 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:03:08.972 Installing symlink pointing to librte_meter.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:03:08.972 Installing symlink pointing to librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.23 00:03:08.972 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:03:08.972 Installing symlink pointing to librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.23 00:03:08.972 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:03:08.972 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.23 00:03:08.972 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:03:08.972 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.23 00:03:08.972 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:03:08.972 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.23 00:03:08.972 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:03:08.972 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.23 00:03:08.972 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:03:08.972 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.23 00:03:08.972 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:03:08.972 Installing symlink pointing to librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:03:08.972 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:03:08.972 Installing symlink pointing to librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 00:03:08.972 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:03:08.972 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:03:08.972 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 
00:03:08.972 Installing symlink pointing to librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:03:08.972 Installing symlink pointing to librte_cfgfile.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:03:08.972 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:03:08.972 Installing symlink pointing to librte_compressdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:03:08.972 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:03:08.972 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:03:08.972 Installing symlink pointing to librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:03:08.972 Installing symlink pointing to librte_distributor.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:03:08.972 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:03:08.972 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:03:08.972 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:03:08.972 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:03:08.972 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:03:08.972 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:03:08.972 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:03:08.972 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:03:08.972 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:03:08.972 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:03:08.972 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:03:08.972 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:03:08.972 Installing symlink pointing to librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 00:03:08.972 Installing symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:03:08.972 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:03:08.972 Installing symlink pointing to librte_latencystats.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:03:08.972 Installing symlink pointing to librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:03:08.972 Installing symlink pointing to librte_lpm.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:03:08.972 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.23 00:03:08.972 Installing symlink pointing to librte_member.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:03:08.972 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:03:08.972 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:03:08.972 Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.23 00:03:08.972 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:03:08.972 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:03:08.972 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:03:08.972 Installing symlink pointing to librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:03:08.972 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:03:08.972 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:03:08.972 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:03:08.972 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:03:08.972 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:03:08.972 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:03:08.972 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:03:08.972 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:03:08.972 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:03:08.972 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.23 00:03:08.972 Installing symlink pointing to librte_security.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:03:08.972 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:03:08.972 Installing symlink pointing to librte_stack.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:03:08.972 Installing symlink pointing to librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:03:08.972 Installing symlink pointing to librte_vhost.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:03:08.973 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:03:08.973 Installing symlink pointing to librte_ipsec.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:03:08.973 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:03:08.973 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:03:08.973 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.23 00:03:08.973 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:03:08.973 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:03:08.973 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:03:08.973 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.23 00:03:08.973 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:03:08.973 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:03:08.973 Installing symlink pointing to librte_pipeline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:03:08.973 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.23 00:03:08.973 Installing symlink pointing to librte_graph.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:03:08.973 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.23 00:03:08.973 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:03:08.973 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:08.973 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:08.973 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:08.973 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:08.973 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:08.973 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:08.973 './librte_bus_pci.so.23.0' -> 
'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:08.973 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:08.973 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:08.973 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:08.973 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:08.973 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:08.973 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:08.973 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:08.973 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:08.973 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:08.973 Installing symlink pointing to librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:08.973 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:08.973 Installing symlink pointing to librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:08.973 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:08.973 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:08.973 09:12:53 build_native_dpdk -- common/autobuild_common.sh@207 -- $ cat 00:03:08.973 09:12:53 build_native_dpdk -- common/autobuild_common.sh@212 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:08.973 00:03:08.973 real 1m23.620s 00:03:08.973 user 14m27.677s 00:03:08.973 sys 1m47.619s 00:03:08.973 09:12:53 build_native_dpdk -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:08.973 09:12:53 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:08.973 ************************************ 00:03:08.973 END TEST build_native_dpdk 00:03:08.973 ************************************ 00:03:08.973 09:12:53 -- common/autotest_common.sh@1142 -- $ return 0 00:03:08.973 09:12:53 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:08.973 09:12:53 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:08.973 09:12:53 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:08.973 09:12:53 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:08.973 09:12:53 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:08.973 09:12:53 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:08.973 09:12:53 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:08.973 09:12:53 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:03:08.973 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 
00:03:09.231 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:09.231 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:09.231 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:03:09.489 Using 'verbs' RDMA provider 00:03:20.052 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:28.155 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:28.412 Creating mk/config.mk...done. 00:03:28.412 Creating mk/cc.flags.mk...done. 00:03:28.412 Type 'make' to build. 00:03:28.412 09:13:12 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:03:28.412 09:13:12 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:03:28.412 09:13:12 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:28.412 09:13:12 -- common/autotest_common.sh@10 -- $ set +x 00:03:28.412 ************************************ 00:03:28.412 START TEST make 00:03:28.412 ************************************ 00:03:28.412 09:13:12 make -- common/autotest_common.sh@1123 -- $ make -j48 00:03:28.670 make[1]: Nothing to be done for 'all'. 00:03:30.052 The Meson build system 00:03:30.052 Version: 1.3.1 00:03:30.052 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:30.052 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:30.052 Build type: native build 00:03:30.052 Project name: libvfio-user 00:03:30.052 Project version: 0.0.1 00:03:30.052 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:30.052 C linker for the host machine: gcc ld.bfd 2.39-16 00:03:30.052 Host machine cpu family: x86_64 00:03:30.052 Host machine cpu: x86_64 00:03:30.052 Run-time dependency threads found: YES 00:03:30.052 Library dl found: YES 00:03:30.052 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:30.052 Run-time dependency json-c found: YES 0.17 00:03:30.052 Run-time dependency cmocka found: YES 1.1.7 00:03:30.052 Program pytest-3 found: NO 00:03:30.052 Program flake8 found: NO 00:03:30.052 Program misspell-fixer found: NO 00:03:30.052 Program restructuredtext-lint found: NO 00:03:30.052 Program valgrind found: YES (/usr/bin/valgrind) 00:03:30.052 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:30.052 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:30.052 Compiler for C supports arguments -Wwrite-strings: YES 00:03:30.052 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:30.052 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:30.052 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:30.052 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:03:30.052 Build targets in project: 8 00:03:30.052 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:30.052 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:30.052 00:03:30.052 libvfio-user 0.0.1 00:03:30.052 00:03:30.052 User defined options 00:03:30.052 buildtype : debug 00:03:30.052 default_library: shared 00:03:30.052 libdir : /usr/local/lib 00:03:30.052 00:03:30.052 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:30.996 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:30.996 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:30.996 [2/37] Compiling C object samples/null.p/null.c.o 00:03:30.996 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:30.996 [4/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:30.996 [5/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:30.996 [6/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:31.254 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:31.254 [8/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:31.254 [9/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:31.254 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:31.254 [11/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:31.254 [12/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:31.254 [13/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:31.254 [14/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:31.254 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:31.254 [16/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:31.254 [17/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:31.254 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:31.254 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:31.254 [20/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:31.254 [21/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:31.254 [22/37] Compiling C object samples/server.p/server.c.o 00:03:31.254 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:31.254 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:31.254 [25/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:31.254 [26/37] Compiling C object samples/client.p/client.c.o 00:03:31.254 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:31.513 [28/37] Linking target samples/client 00:03:31.513 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:03:31.513 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:31.513 [31/37] Linking target test/unit_tests 00:03:31.778 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:31.778 [33/37] Linking target samples/gpio-pci-idio-16 00:03:31.778 [34/37] Linking target samples/null 00:03:31.778 [35/37] Linking target samples/server 00:03:31.778 [36/37] Linking target samples/lspci 00:03:31.778 [37/37] Linking target samples/shadow_ioeventfd_server 00:03:31.778 INFO: autodetecting backend as ninja 00:03:31.778 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:03:31.778 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:32.727 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:32.727 ninja: no work to do. 00:03:44.923 CC lib/ut_mock/mock.o 00:03:44.923 CC lib/ut/ut.o 00:03:44.923 CC lib/log/log.o 00:03:44.923 CC lib/log/log_flags.o 00:03:44.923 CC lib/log/log_deprecated.o 00:03:44.923 LIB libspdk_ut.a 00:03:44.923 LIB libspdk_ut_mock.a 00:03:44.923 LIB libspdk_log.a 00:03:44.923 SO libspdk_ut_mock.so.6.0 00:03:44.923 SO libspdk_ut.so.2.0 00:03:44.923 SO libspdk_log.so.7.0 00:03:44.923 SYMLINK libspdk_ut_mock.so 00:03:44.923 SYMLINK libspdk_ut.so 00:03:44.923 SYMLINK libspdk_log.so 00:03:44.923 CC lib/dma/dma.o 00:03:44.923 CXX lib/trace_parser/trace.o 00:03:44.923 CC lib/ioat/ioat.o 00:03:44.923 CC lib/util/base64.o 00:03:44.923 CC lib/util/bit_array.o 00:03:44.923 CC lib/util/cpuset.o 00:03:44.923 CC lib/util/crc16.o 00:03:44.923 CC lib/util/crc32.o 00:03:44.923 CC lib/util/crc32c.o 00:03:44.923 CC lib/util/crc32_ieee.o 00:03:44.923 CC lib/util/crc64.o 00:03:44.923 CC lib/util/dif.o 00:03:44.923 CC lib/util/fd.o 00:03:44.923 CC lib/util/file.o 00:03:44.923 CC lib/util/hexlify.o 00:03:44.923 CC lib/util/iov.o 00:03:44.923 CC lib/util/math.o 00:03:44.923 CC lib/util/pipe.o 00:03:44.923 CC lib/util/strerror_tls.o 00:03:44.923 CC lib/util/string.o 00:03:44.923 CC lib/util/uuid.o 00:03:44.923 CC lib/util/fd_group.o 00:03:44.923 CC lib/util/xor.o 00:03:44.923 CC lib/util/zipf.o 00:03:44.924 CC lib/vfio_user/host/vfio_user_pci.o 00:03:44.924 CC lib/vfio_user/host/vfio_user.o 00:03:44.924 LIB libspdk_dma.a 00:03:44.924 SO libspdk_dma.so.4.0 00:03:44.924 SYMLINK libspdk_dma.so 00:03:44.924 LIB libspdk_ioat.a 00:03:44.924 SO libspdk_ioat.so.7.0 00:03:44.924 SYMLINK libspdk_ioat.so 00:03:44.924 LIB libspdk_vfio_user.a 00:03:44.924 SO libspdk_vfio_user.so.5.0 00:03:44.924 SYMLINK libspdk_vfio_user.so 00:03:44.924 LIB libspdk_util.a 00:03:44.924 SO libspdk_util.so.9.1 00:03:44.924 SYMLINK libspdk_util.so 00:03:45.181 CC lib/json/json_parse.o 00:03:45.181 CC lib/env_dpdk/env.o 00:03:45.181 CC lib/conf/conf.o 00:03:45.181 CC lib/rdma_provider/common.o 00:03:45.181 CC lib/json/json_util.o 00:03:45.181 CC lib/vmd/vmd.o 00:03:45.181 CC lib/env_dpdk/memory.o 00:03:45.182 CC lib/rdma_utils/rdma_utils.o 00:03:45.182 CC lib/json/json_write.o 00:03:45.182 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:45.182 CC lib/vmd/led.o 00:03:45.182 CC lib/env_dpdk/pci.o 00:03:45.182 CC lib/idxd/idxd.o 00:03:45.182 CC lib/env_dpdk/init.o 00:03:45.182 CC lib/idxd/idxd_user.o 00:03:45.182 CC lib/env_dpdk/threads.o 00:03:45.182 CC lib/env_dpdk/pci_ioat.o 00:03:45.182 CC lib/idxd/idxd_kernel.o 00:03:45.182 CC lib/env_dpdk/pci_virtio.o 00:03:45.182 CC lib/env_dpdk/pci_vmd.o 00:03:45.182 CC lib/env_dpdk/pci_idxd.o 00:03:45.182 CC lib/env_dpdk/pci_event.o 00:03:45.182 CC lib/env_dpdk/sigbus_handler.o 00:03:45.182 CC lib/env_dpdk/pci_dpdk.o 00:03:45.182 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:45.182 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:45.182 LIB libspdk_trace_parser.a 00:03:45.182 SO libspdk_trace_parser.so.5.0 00:03:45.439 LIB libspdk_rdma_provider.a 00:03:45.439 SYMLINK libspdk_trace_parser.so 00:03:45.439 SO libspdk_rdma_provider.so.6.0 00:03:45.439 LIB libspdk_conf.a 00:03:45.439 SO libspdk_conf.so.6.0 00:03:45.439 SYMLINK libspdk_rdma_provider.so 00:03:45.439 LIB 
libspdk_rdma_utils.a 00:03:45.439 LIB libspdk_json.a 00:03:45.439 SO libspdk_rdma_utils.so.1.0 00:03:45.439 SYMLINK libspdk_conf.so 00:03:45.439 SO libspdk_json.so.6.0 00:03:45.697 SYMLINK libspdk_rdma_utils.so 00:03:45.697 SYMLINK libspdk_json.so 00:03:45.697 CC lib/jsonrpc/jsonrpc_server.o 00:03:45.697 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:45.697 CC lib/jsonrpc/jsonrpc_client.o 00:03:45.697 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:45.697 LIB libspdk_idxd.a 00:03:45.697 SO libspdk_idxd.so.12.0 00:03:46.022 SYMLINK libspdk_idxd.so 00:03:46.022 LIB libspdk_vmd.a 00:03:46.022 SO libspdk_vmd.so.6.0 00:03:46.022 SYMLINK libspdk_vmd.so 00:03:46.022 LIB libspdk_jsonrpc.a 00:03:46.022 SO libspdk_jsonrpc.so.6.0 00:03:46.301 SYMLINK libspdk_jsonrpc.so 00:03:46.301 CC lib/rpc/rpc.o 00:03:46.558 LIB libspdk_rpc.a 00:03:46.558 SO libspdk_rpc.so.6.0 00:03:46.558 SYMLINK libspdk_rpc.so 00:03:46.817 CC lib/trace/trace.o 00:03:46.817 CC lib/keyring/keyring.o 00:03:46.817 CC lib/notify/notify.o 00:03:46.817 CC lib/trace/trace_flags.o 00:03:46.817 CC lib/keyring/keyring_rpc.o 00:03:46.817 CC lib/notify/notify_rpc.o 00:03:46.817 CC lib/trace/trace_rpc.o 00:03:46.817 LIB libspdk_notify.a 00:03:46.817 SO libspdk_notify.so.6.0 00:03:47.075 LIB libspdk_keyring.a 00:03:47.075 SYMLINK libspdk_notify.so 00:03:47.075 LIB libspdk_trace.a 00:03:47.075 SO libspdk_keyring.so.1.0 00:03:47.075 SO libspdk_trace.so.10.0 00:03:47.075 SYMLINK libspdk_keyring.so 00:03:47.075 SYMLINK libspdk_trace.so 00:03:47.075 LIB libspdk_env_dpdk.a 00:03:47.334 SO libspdk_env_dpdk.so.14.1 00:03:47.334 CC lib/sock/sock.o 00:03:47.334 CC lib/sock/sock_rpc.o 00:03:47.334 CC lib/thread/thread.o 00:03:47.334 CC lib/thread/iobuf.o 00:03:47.334 SYMLINK libspdk_env_dpdk.so 00:03:47.593 LIB libspdk_sock.a 00:03:47.593 SO libspdk_sock.so.10.0 00:03:47.851 SYMLINK libspdk_sock.so 00:03:47.851 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:47.851 CC lib/nvme/nvme_ctrlr.o 00:03:47.851 CC lib/nvme/nvme_fabric.o 00:03:47.851 CC lib/nvme/nvme_ns_cmd.o 00:03:47.851 CC lib/nvme/nvme_ns.o 00:03:47.851 CC lib/nvme/nvme_pcie_common.o 00:03:47.851 CC lib/nvme/nvme_pcie.o 00:03:47.851 CC lib/nvme/nvme_qpair.o 00:03:47.851 CC lib/nvme/nvme.o 00:03:47.851 CC lib/nvme/nvme_quirks.o 00:03:47.851 CC lib/nvme/nvme_transport.o 00:03:47.851 CC lib/nvme/nvme_discovery.o 00:03:47.851 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:47.851 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:47.851 CC lib/nvme/nvme_tcp.o 00:03:47.851 CC lib/nvme/nvme_opal.o 00:03:47.851 CC lib/nvme/nvme_io_msg.o 00:03:47.851 CC lib/nvme/nvme_poll_group.o 00:03:47.851 CC lib/nvme/nvme_zns.o 00:03:47.851 CC lib/nvme/nvme_stubs.o 00:03:47.851 CC lib/nvme/nvme_auth.o 00:03:47.851 CC lib/nvme/nvme_cuse.o 00:03:47.851 CC lib/nvme/nvme_vfio_user.o 00:03:47.851 CC lib/nvme/nvme_rdma.o 00:03:48.786 LIB libspdk_thread.a 00:03:48.786 SO libspdk_thread.so.10.1 00:03:49.044 SYMLINK libspdk_thread.so 00:03:49.044 CC lib/init/json_config.o 00:03:49.044 CC lib/accel/accel.o 00:03:49.044 CC lib/init/subsystem.o 00:03:49.044 CC lib/virtio/virtio.o 00:03:49.044 CC lib/init/subsystem_rpc.o 00:03:49.044 CC lib/accel/accel_rpc.o 00:03:49.044 CC lib/virtio/virtio_vhost_user.o 00:03:49.044 CC lib/init/rpc.o 00:03:49.044 CC lib/virtio/virtio_vfio_user.o 00:03:49.044 CC lib/accel/accel_sw.o 00:03:49.044 CC lib/virtio/virtio_pci.o 00:03:49.044 CC lib/vfu_tgt/tgt_endpoint.o 00:03:49.044 CC lib/blob/blobstore.o 00:03:49.044 CC lib/vfu_tgt/tgt_rpc.o 00:03:49.044 CC lib/blob/request.o 00:03:49.044 CC lib/blob/zeroes.o 00:03:49.044 CC 
lib/blob/blob_bs_dev.o 00:03:49.303 LIB libspdk_init.a 00:03:49.303 SO libspdk_init.so.5.0 00:03:49.561 LIB libspdk_virtio.a 00:03:49.561 LIB libspdk_vfu_tgt.a 00:03:49.561 SYMLINK libspdk_init.so 00:03:49.561 SO libspdk_vfu_tgt.so.3.0 00:03:49.561 SO libspdk_virtio.so.7.0 00:03:49.561 SYMLINK libspdk_vfu_tgt.so 00:03:49.561 SYMLINK libspdk_virtio.so 00:03:49.561 CC lib/event/app.o 00:03:49.561 CC lib/event/reactor.o 00:03:49.561 CC lib/event/log_rpc.o 00:03:49.561 CC lib/event/app_rpc.o 00:03:49.561 CC lib/event/scheduler_static.o 00:03:50.126 LIB libspdk_event.a 00:03:50.126 SO libspdk_event.so.14.0 00:03:50.126 LIB libspdk_accel.a 00:03:50.126 SYMLINK libspdk_event.so 00:03:50.126 SO libspdk_accel.so.15.1 00:03:50.126 SYMLINK libspdk_accel.so 00:03:50.384 LIB libspdk_nvme.a 00:03:50.384 CC lib/bdev/bdev.o 00:03:50.384 CC lib/bdev/bdev_rpc.o 00:03:50.384 CC lib/bdev/bdev_zone.o 00:03:50.384 CC lib/bdev/part.o 00:03:50.384 CC lib/bdev/scsi_nvme.o 00:03:50.384 SO libspdk_nvme.so.13.1 00:03:50.642 SYMLINK libspdk_nvme.so 00:03:52.016 LIB libspdk_blob.a 00:03:52.016 SO libspdk_blob.so.11.0 00:03:52.274 SYMLINK libspdk_blob.so 00:03:52.274 CC lib/blobfs/blobfs.o 00:03:52.274 CC lib/blobfs/tree.o 00:03:52.274 CC lib/lvol/lvol.o 00:03:52.839 LIB libspdk_bdev.a 00:03:52.839 SO libspdk_bdev.so.15.1 00:03:53.103 SYMLINK libspdk_bdev.so 00:03:53.103 LIB libspdk_blobfs.a 00:03:53.103 CC lib/ublk/ublk.o 00:03:53.103 CC lib/scsi/dev.o 00:03:53.103 CC lib/scsi/lun.o 00:03:53.103 CC lib/ublk/ublk_rpc.o 00:03:53.103 CC lib/scsi/port.o 00:03:53.103 CC lib/ftl/ftl_core.o 00:03:53.103 CC lib/nbd/nbd.o 00:03:53.103 CC lib/nvmf/ctrlr.o 00:03:53.103 CC lib/scsi/scsi.o 00:03:53.103 CC lib/nbd/nbd_rpc.o 00:03:53.103 CC lib/ftl/ftl_init.o 00:03:53.103 CC lib/nvmf/ctrlr_discovery.o 00:03:53.103 CC lib/scsi/scsi_bdev.o 00:03:53.103 CC lib/ftl/ftl_layout.o 00:03:53.103 CC lib/nvmf/ctrlr_bdev.o 00:03:53.103 CC lib/ftl/ftl_debug.o 00:03:53.103 CC lib/scsi/scsi_pr.o 00:03:53.103 CC lib/nvmf/subsystem.o 00:03:53.103 CC lib/nvmf/nvmf.o 00:03:53.103 CC lib/scsi/scsi_rpc.o 00:03:53.103 CC lib/nvmf/nvmf_rpc.o 00:03:53.103 CC lib/ftl/ftl_io.o 00:03:53.103 CC lib/scsi/task.o 00:03:53.103 CC lib/nvmf/transport.o 00:03:53.103 CC lib/ftl/ftl_sb.o 00:03:53.103 CC lib/nvmf/tcp.o 00:03:53.103 CC lib/ftl/ftl_l2p.o 00:03:53.103 CC lib/ftl/ftl_l2p_flat.o 00:03:53.103 CC lib/ftl/ftl_nv_cache.o 00:03:53.103 CC lib/nvmf/stubs.o 00:03:53.103 CC lib/nvmf/mdns_server.o 00:03:53.103 CC lib/ftl/ftl_band.o 00:03:53.103 CC lib/nvmf/vfio_user.o 00:03:53.103 CC lib/ftl/ftl_band_ops.o 00:03:53.103 CC lib/nvmf/rdma.o 00:03:53.103 CC lib/ftl/ftl_writer.o 00:03:53.103 CC lib/nvmf/auth.o 00:03:53.103 CC lib/ftl/ftl_rq.o 00:03:53.103 CC lib/ftl/ftl_reloc.o 00:03:53.103 CC lib/ftl/ftl_l2p_cache.o 00:03:53.103 CC lib/ftl/ftl_p2l.o 00:03:53.103 CC lib/ftl/mngt/ftl_mngt.o 00:03:53.103 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:53.103 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:53.104 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:53.104 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:53.104 SO libspdk_blobfs.so.10.0 00:03:53.365 SYMLINK libspdk_blobfs.so 00:03:53.365 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:53.365 LIB libspdk_lvol.a 00:03:53.365 SO libspdk_lvol.so.10.0 00:03:53.626 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:53.626 SYMLINK libspdk_lvol.so 00:03:53.626 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:53.626 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:53.626 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:53.626 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:53.626 CC lib/ftl/mngt/ftl_mngt_recovery.o 
00:03:53.626 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:53.626 CC lib/ftl/utils/ftl_conf.o 00:03:53.626 CC lib/ftl/utils/ftl_md.o 00:03:53.626 CC lib/ftl/utils/ftl_mempool.o 00:03:53.626 CC lib/ftl/utils/ftl_bitmap.o 00:03:53.626 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:53.626 CC lib/ftl/utils/ftl_property.o 00:03:53.626 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:53.626 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:53.626 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:53.626 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:53.626 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:53.626 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:53.886 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:53.886 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:53.886 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:53.886 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:53.886 CC lib/ftl/base/ftl_base_dev.o 00:03:53.886 CC lib/ftl/base/ftl_base_bdev.o 00:03:53.886 CC lib/ftl/ftl_trace.o 00:03:53.886 LIB libspdk_nbd.a 00:03:53.886 SO libspdk_nbd.so.7.0 00:03:54.144 SYMLINK libspdk_nbd.so 00:03:54.144 LIB libspdk_scsi.a 00:03:54.144 SO libspdk_scsi.so.9.0 00:03:54.144 LIB libspdk_ublk.a 00:03:54.144 SYMLINK libspdk_scsi.so 00:03:54.401 SO libspdk_ublk.so.3.0 00:03:54.401 SYMLINK libspdk_ublk.so 00:03:54.401 CC lib/vhost/vhost.o 00:03:54.401 CC lib/iscsi/conn.o 00:03:54.401 CC lib/vhost/vhost_rpc.o 00:03:54.401 CC lib/iscsi/init_grp.o 00:03:54.401 CC lib/vhost/vhost_scsi.o 00:03:54.401 CC lib/iscsi/iscsi.o 00:03:54.401 CC lib/vhost/vhost_blk.o 00:03:54.401 CC lib/iscsi/md5.o 00:03:54.401 CC lib/iscsi/param.o 00:03:54.401 CC lib/vhost/rte_vhost_user.o 00:03:54.401 CC lib/iscsi/portal_grp.o 00:03:54.401 CC lib/iscsi/tgt_node.o 00:03:54.401 CC lib/iscsi/iscsi_subsystem.o 00:03:54.401 CC lib/iscsi/iscsi_rpc.o 00:03:54.401 CC lib/iscsi/task.o 00:03:54.658 LIB libspdk_ftl.a 00:03:54.916 SO libspdk_ftl.so.9.0 00:03:55.175 SYMLINK libspdk_ftl.so 00:03:55.741 LIB libspdk_vhost.a 00:03:55.741 SO libspdk_vhost.so.8.0 00:03:55.741 SYMLINK libspdk_vhost.so 00:03:55.741 LIB libspdk_nvmf.a 00:03:55.741 LIB libspdk_iscsi.a 00:03:56.001 SO libspdk_nvmf.so.18.1 00:03:56.001 SO libspdk_iscsi.so.8.0 00:03:56.001 SYMLINK libspdk_iscsi.so 00:03:56.001 SYMLINK libspdk_nvmf.so 00:03:56.260 CC module/env_dpdk/env_dpdk_rpc.o 00:03:56.260 CC module/vfu_device/vfu_virtio.o 00:03:56.260 CC module/vfu_device/vfu_virtio_blk.o 00:03:56.260 CC module/vfu_device/vfu_virtio_scsi.o 00:03:56.260 CC module/vfu_device/vfu_virtio_rpc.o 00:03:56.519 CC module/accel/error/accel_error.o 00:03:56.519 CC module/scheduler/gscheduler/gscheduler.o 00:03:56.519 CC module/accel/error/accel_error_rpc.o 00:03:56.519 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:56.519 CC module/blob/bdev/blob_bdev.o 00:03:56.519 CC module/keyring/linux/keyring.o 00:03:56.519 CC module/accel/ioat/accel_ioat.o 00:03:56.519 CC module/keyring/linux/keyring_rpc.o 00:03:56.519 CC module/accel/ioat/accel_ioat_rpc.o 00:03:56.519 CC module/sock/posix/posix.o 00:03:56.519 CC module/accel/iaa/accel_iaa.o 00:03:56.519 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:56.519 CC module/accel/iaa/accel_iaa_rpc.o 00:03:56.519 CC module/accel/dsa/accel_dsa.o 00:03:56.519 CC module/accel/dsa/accel_dsa_rpc.o 00:03:56.519 CC module/keyring/file/keyring.o 00:03:56.519 CC module/keyring/file/keyring_rpc.o 00:03:56.519 LIB libspdk_env_dpdk_rpc.a 00:03:56.519 SO libspdk_env_dpdk_rpc.so.6.0 00:03:56.519 SYMLINK libspdk_env_dpdk_rpc.so 00:03:56.519 LIB libspdk_keyring_linux.a 00:03:56.519 LIB libspdk_scheduler_gscheduler.a 00:03:56.519 LIB 
libspdk_keyring_file.a 00:03:56.519 LIB libspdk_scheduler_dpdk_governor.a 00:03:56.519 SO libspdk_keyring_linux.so.1.0 00:03:56.519 SO libspdk_scheduler_gscheduler.so.4.0 00:03:56.519 SO libspdk_keyring_file.so.1.0 00:03:56.519 LIB libspdk_accel_error.a 00:03:56.777 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:56.777 LIB libspdk_accel_ioat.a 00:03:56.777 LIB libspdk_scheduler_dynamic.a 00:03:56.777 LIB libspdk_accel_iaa.a 00:03:56.777 SO libspdk_accel_error.so.2.0 00:03:56.777 SYMLINK libspdk_scheduler_gscheduler.so 00:03:56.777 SYMLINK libspdk_keyring_linux.so 00:03:56.777 SO libspdk_accel_ioat.so.6.0 00:03:56.777 SO libspdk_scheduler_dynamic.so.4.0 00:03:56.777 SYMLINK libspdk_keyring_file.so 00:03:56.777 SO libspdk_accel_iaa.so.3.0 00:03:56.777 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:56.777 LIB libspdk_accel_dsa.a 00:03:56.777 SYMLINK libspdk_accel_error.so 00:03:56.777 LIB libspdk_blob_bdev.a 00:03:56.777 SYMLINK libspdk_accel_ioat.so 00:03:56.777 SYMLINK libspdk_scheduler_dynamic.so 00:03:56.777 SYMLINK libspdk_accel_iaa.so 00:03:56.777 SO libspdk_blob_bdev.so.11.0 00:03:56.777 SO libspdk_accel_dsa.so.5.0 00:03:56.777 SYMLINK libspdk_blob_bdev.so 00:03:56.777 SYMLINK libspdk_accel_dsa.so 00:03:57.034 LIB libspdk_vfu_device.a 00:03:57.034 SO libspdk_vfu_device.so.3.0 00:03:57.034 CC module/bdev/malloc/bdev_malloc.o 00:03:57.034 CC module/bdev/null/bdev_null.o 00:03:57.034 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:57.034 CC module/bdev/raid/bdev_raid.o 00:03:57.034 CC module/bdev/null/bdev_null_rpc.o 00:03:57.034 CC module/bdev/error/vbdev_error.o 00:03:57.034 CC module/bdev/raid/bdev_raid_rpc.o 00:03:57.034 CC module/bdev/nvme/bdev_nvme.o 00:03:57.034 CC module/bdev/lvol/vbdev_lvol.o 00:03:57.034 CC module/blobfs/bdev/blobfs_bdev.o 00:03:57.034 CC module/bdev/error/vbdev_error_rpc.o 00:03:57.034 CC module/bdev/delay/vbdev_delay.o 00:03:57.034 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:57.034 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:57.034 CC module/bdev/raid/bdev_raid_sb.o 00:03:57.034 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:57.034 CC module/bdev/gpt/gpt.o 00:03:57.034 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:57.034 CC module/bdev/passthru/vbdev_passthru.o 00:03:57.034 CC module/bdev/raid/raid0.o 00:03:57.034 CC module/bdev/gpt/vbdev_gpt.o 00:03:57.034 CC module/bdev/nvme/nvme_rpc.o 00:03:57.034 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:57.034 CC module/bdev/raid/raid1.o 00:03:57.034 CC module/bdev/raid/concat.o 00:03:57.034 CC module/bdev/nvme/bdev_mdns_client.o 00:03:57.034 CC module/bdev/iscsi/bdev_iscsi.o 00:03:57.034 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:57.034 CC module/bdev/nvme/vbdev_opal.o 00:03:57.034 CC module/bdev/aio/bdev_aio.o 00:03:57.034 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:57.034 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:57.034 CC module/bdev/aio/bdev_aio_rpc.o 00:03:57.034 CC module/bdev/ftl/bdev_ftl.o 00:03:57.034 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:57.034 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:57.034 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:57.034 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:57.034 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:57.034 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:57.034 CC module/bdev/split/vbdev_split.o 00:03:57.034 CC module/bdev/split/vbdev_split_rpc.o 00:03:57.292 SYMLINK libspdk_vfu_device.so 00:03:57.292 LIB libspdk_sock_posix.a 00:03:57.549 LIB libspdk_blobfs_bdev.a 00:03:57.549 SO libspdk_sock_posix.so.6.0 00:03:57.549 LIB libspdk_bdev_ftl.a 
00:03:57.549 SO libspdk_blobfs_bdev.so.6.0 00:03:57.549 SO libspdk_bdev_ftl.so.6.0 00:03:57.549 SYMLINK libspdk_sock_posix.so 00:03:57.549 LIB libspdk_bdev_split.a 00:03:57.549 LIB libspdk_bdev_null.a 00:03:57.549 SYMLINK libspdk_bdev_ftl.so 00:03:57.549 LIB libspdk_bdev_gpt.a 00:03:57.549 SO libspdk_bdev_split.so.6.0 00:03:57.549 SYMLINK libspdk_blobfs_bdev.so 00:03:57.549 LIB libspdk_bdev_error.a 00:03:57.549 SO libspdk_bdev_null.so.6.0 00:03:57.549 SO libspdk_bdev_gpt.so.6.0 00:03:57.549 LIB libspdk_bdev_passthru.a 00:03:57.549 LIB libspdk_bdev_aio.a 00:03:57.549 SO libspdk_bdev_error.so.6.0 00:03:57.549 SO libspdk_bdev_passthru.so.6.0 00:03:57.549 SYMLINK libspdk_bdev_split.so 00:03:57.549 LIB libspdk_bdev_zone_block.a 00:03:57.549 SO libspdk_bdev_aio.so.6.0 00:03:57.549 SYMLINK libspdk_bdev_null.so 00:03:57.549 SYMLINK libspdk_bdev_gpt.so 00:03:57.549 SO libspdk_bdev_zone_block.so.6.0 00:03:57.549 LIB libspdk_bdev_malloc.a 00:03:57.549 SYMLINK libspdk_bdev_error.so 00:03:57.549 LIB libspdk_bdev_delay.a 00:03:57.549 LIB libspdk_bdev_iscsi.a 00:03:57.549 LIB libspdk_bdev_lvol.a 00:03:57.807 SYMLINK libspdk_bdev_passthru.so 00:03:57.807 SO libspdk_bdev_malloc.so.6.0 00:03:57.807 SO libspdk_bdev_delay.so.6.0 00:03:57.807 SO libspdk_bdev_iscsi.so.6.0 00:03:57.807 SYMLINK libspdk_bdev_aio.so 00:03:57.807 SO libspdk_bdev_lvol.so.6.0 00:03:57.807 SYMLINK libspdk_bdev_zone_block.so 00:03:57.807 SYMLINK libspdk_bdev_delay.so 00:03:57.807 SYMLINK libspdk_bdev_malloc.so 00:03:57.807 SYMLINK libspdk_bdev_iscsi.so 00:03:57.807 SYMLINK libspdk_bdev_lvol.so 00:03:57.807 LIB libspdk_bdev_virtio.a 00:03:57.807 SO libspdk_bdev_virtio.so.6.0 00:03:58.065 SYMLINK libspdk_bdev_virtio.so 00:03:58.323 LIB libspdk_bdev_raid.a 00:03:58.323 SO libspdk_bdev_raid.so.6.0 00:03:58.323 SYMLINK libspdk_bdev_raid.so 00:03:59.735 LIB libspdk_bdev_nvme.a 00:03:59.735 SO libspdk_bdev_nvme.so.7.0 00:03:59.992 SYMLINK libspdk_bdev_nvme.so 00:04:00.251 CC module/event/subsystems/sock/sock.o 00:04:00.251 CC module/event/subsystems/iobuf/iobuf.o 00:04:00.251 CC module/event/subsystems/scheduler/scheduler.o 00:04:00.251 CC module/event/subsystems/keyring/keyring.o 00:04:00.251 CC module/event/subsystems/vmd/vmd.o 00:04:00.251 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:00.251 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:00.251 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:00.251 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:00.509 LIB libspdk_event_keyring.a 00:04:00.509 LIB libspdk_event_vhost_blk.a 00:04:00.509 LIB libspdk_event_scheduler.a 00:04:00.509 LIB libspdk_event_sock.a 00:04:00.509 LIB libspdk_event_vfu_tgt.a 00:04:00.509 LIB libspdk_event_vmd.a 00:04:00.509 SO libspdk_event_keyring.so.1.0 00:04:00.509 SO libspdk_event_vhost_blk.so.3.0 00:04:00.509 LIB libspdk_event_iobuf.a 00:04:00.509 SO libspdk_event_sock.so.5.0 00:04:00.509 SO libspdk_event_scheduler.so.4.0 00:04:00.509 SO libspdk_event_vfu_tgt.so.3.0 00:04:00.509 SO libspdk_event_vmd.so.6.0 00:04:00.509 SO libspdk_event_iobuf.so.3.0 00:04:00.509 SYMLINK libspdk_event_keyring.so 00:04:00.509 SYMLINK libspdk_event_vhost_blk.so 00:04:00.509 SYMLINK libspdk_event_sock.so 00:04:00.509 SYMLINK libspdk_event_scheduler.so 00:04:00.509 SYMLINK libspdk_event_vfu_tgt.so 00:04:00.509 SYMLINK libspdk_event_vmd.so 00:04:00.509 SYMLINK libspdk_event_iobuf.so 00:04:00.768 CC module/event/subsystems/accel/accel.o 00:04:00.768 LIB libspdk_event_accel.a 00:04:00.768 SO libspdk_event_accel.so.6.0 00:04:00.768 SYMLINK libspdk_event_accel.so 00:04:01.026 CC 
module/event/subsystems/bdev/bdev.o 00:04:01.284 LIB libspdk_event_bdev.a 00:04:01.284 SO libspdk_event_bdev.so.6.0 00:04:01.284 SYMLINK libspdk_event_bdev.so 00:04:01.543 CC module/event/subsystems/ublk/ublk.o 00:04:01.543 CC module/event/subsystems/nbd/nbd.o 00:04:01.543 CC module/event/subsystems/scsi/scsi.o 00:04:01.543 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:01.543 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:01.543 LIB libspdk_event_ublk.a 00:04:01.543 LIB libspdk_event_nbd.a 00:04:01.543 LIB libspdk_event_scsi.a 00:04:01.543 SO libspdk_event_ublk.so.3.0 00:04:01.543 SO libspdk_event_nbd.so.6.0 00:04:01.543 SO libspdk_event_scsi.so.6.0 00:04:01.801 SYMLINK libspdk_event_ublk.so 00:04:01.801 SYMLINK libspdk_event_nbd.so 00:04:01.801 SYMLINK libspdk_event_scsi.so 00:04:01.801 LIB libspdk_event_nvmf.a 00:04:01.801 SO libspdk_event_nvmf.so.6.0 00:04:01.801 SYMLINK libspdk_event_nvmf.so 00:04:01.801 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:01.801 CC module/event/subsystems/iscsi/iscsi.o 00:04:02.060 LIB libspdk_event_vhost_scsi.a 00:04:02.060 LIB libspdk_event_iscsi.a 00:04:02.060 SO libspdk_event_vhost_scsi.so.3.0 00:04:02.060 SO libspdk_event_iscsi.so.6.0 00:04:02.060 SYMLINK libspdk_event_vhost_scsi.so 00:04:02.060 SYMLINK libspdk_event_iscsi.so 00:04:02.319 SO libspdk.so.6.0 00:04:02.319 SYMLINK libspdk.so 00:04:02.319 CXX app/trace/trace.o 00:04:02.319 TEST_HEADER include/spdk/accel.h 00:04:02.319 CC app/trace_record/trace_record.o 00:04:02.319 TEST_HEADER include/spdk/accel_module.h 00:04:02.319 CC test/rpc_client/rpc_client_test.o 00:04:02.319 TEST_HEADER include/spdk/barrier.h 00:04:02.319 TEST_HEADER include/spdk/assert.h 00:04:02.319 CC app/spdk_top/spdk_top.o 00:04:02.319 TEST_HEADER include/spdk/base64.h 00:04:02.319 CC app/spdk_nvme_discover/discovery_aer.o 00:04:02.319 TEST_HEADER include/spdk/bdev.h 00:04:02.319 TEST_HEADER include/spdk/bdev_zone.h 00:04:02.319 TEST_HEADER include/spdk/bdev_module.h 00:04:02.319 CC app/spdk_nvme_identify/identify.o 00:04:02.319 TEST_HEADER include/spdk/bit_array.h 00:04:02.319 CC app/spdk_lspci/spdk_lspci.o 00:04:02.319 TEST_HEADER include/spdk/bit_pool.h 00:04:02.319 TEST_HEADER include/spdk/blob_bdev.h 00:04:02.319 CC app/spdk_nvme_perf/perf.o 00:04:02.319 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:02.319 TEST_HEADER include/spdk/blobfs.h 00:04:02.319 TEST_HEADER include/spdk/blob.h 00:04:02.319 TEST_HEADER include/spdk/conf.h 00:04:02.319 TEST_HEADER include/spdk/config.h 00:04:02.319 TEST_HEADER include/spdk/cpuset.h 00:04:02.319 TEST_HEADER include/spdk/crc16.h 00:04:02.319 TEST_HEADER include/spdk/crc32.h 00:04:02.319 TEST_HEADER include/spdk/crc64.h 00:04:02.319 TEST_HEADER include/spdk/dif.h 00:04:02.319 TEST_HEADER include/spdk/endian.h 00:04:02.319 TEST_HEADER include/spdk/dma.h 00:04:02.319 TEST_HEADER include/spdk/env.h 00:04:02.319 TEST_HEADER include/spdk/env_dpdk.h 00:04:02.319 TEST_HEADER include/spdk/event.h 00:04:02.319 TEST_HEADER include/spdk/fd_group.h 00:04:02.319 TEST_HEADER include/spdk/fd.h 00:04:02.319 TEST_HEADER include/spdk/file.h 00:04:02.319 TEST_HEADER include/spdk/gpt_spec.h 00:04:02.319 TEST_HEADER include/spdk/ftl.h 00:04:02.319 TEST_HEADER include/spdk/hexlify.h 00:04:02.319 TEST_HEADER include/spdk/idxd.h 00:04:02.319 TEST_HEADER include/spdk/histogram_data.h 00:04:02.319 TEST_HEADER include/spdk/idxd_spec.h 00:04:02.319 TEST_HEADER include/spdk/init.h 00:04:02.319 TEST_HEADER include/spdk/ioat.h 00:04:02.319 TEST_HEADER include/spdk/ioat_spec.h 00:04:02.319 TEST_HEADER 
include/spdk/iscsi_spec.h 00:04:02.319 TEST_HEADER include/spdk/json.h 00:04:02.583 TEST_HEADER include/spdk/jsonrpc.h 00:04:02.583 TEST_HEADER include/spdk/keyring.h 00:04:02.583 TEST_HEADER include/spdk/keyring_module.h 00:04:02.583 TEST_HEADER include/spdk/likely.h 00:04:02.583 TEST_HEADER include/spdk/lvol.h 00:04:02.583 TEST_HEADER include/spdk/log.h 00:04:02.583 TEST_HEADER include/spdk/memory.h 00:04:02.583 TEST_HEADER include/spdk/nbd.h 00:04:02.583 TEST_HEADER include/spdk/mmio.h 00:04:02.583 TEST_HEADER include/spdk/notify.h 00:04:02.583 TEST_HEADER include/spdk/nvme.h 00:04:02.583 TEST_HEADER include/spdk/nvme_intel.h 00:04:02.583 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:02.583 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:02.583 TEST_HEADER include/spdk/nvme_spec.h 00:04:02.583 TEST_HEADER include/spdk/nvme_zns.h 00:04:02.583 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:02.583 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:02.583 TEST_HEADER include/spdk/nvmf.h 00:04:02.583 TEST_HEADER include/spdk/nvmf_spec.h 00:04:02.583 TEST_HEADER include/spdk/nvmf_transport.h 00:04:02.583 TEST_HEADER include/spdk/opal.h 00:04:02.583 TEST_HEADER include/spdk/opal_spec.h 00:04:02.583 TEST_HEADER include/spdk/pci_ids.h 00:04:02.583 TEST_HEADER include/spdk/pipe.h 00:04:02.583 TEST_HEADER include/spdk/queue.h 00:04:02.583 TEST_HEADER include/spdk/reduce.h 00:04:02.583 TEST_HEADER include/spdk/rpc.h 00:04:02.583 TEST_HEADER include/spdk/scheduler.h 00:04:02.583 TEST_HEADER include/spdk/scsi.h 00:04:02.583 TEST_HEADER include/spdk/scsi_spec.h 00:04:02.583 TEST_HEADER include/spdk/sock.h 00:04:02.583 TEST_HEADER include/spdk/stdinc.h 00:04:02.583 TEST_HEADER include/spdk/string.h 00:04:02.583 TEST_HEADER include/spdk/thread.h 00:04:02.583 TEST_HEADER include/spdk/trace.h 00:04:02.583 TEST_HEADER include/spdk/trace_parser.h 00:04:02.583 TEST_HEADER include/spdk/tree.h 00:04:02.583 TEST_HEADER include/spdk/ublk.h 00:04:02.583 TEST_HEADER include/spdk/util.h 00:04:02.583 TEST_HEADER include/spdk/uuid.h 00:04:02.583 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:02.583 TEST_HEADER include/spdk/version.h 00:04:02.583 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:02.583 TEST_HEADER include/spdk/vhost.h 00:04:02.583 TEST_HEADER include/spdk/vmd.h 00:04:02.583 TEST_HEADER include/spdk/xor.h 00:04:02.583 TEST_HEADER include/spdk/zipf.h 00:04:02.583 CXX test/cpp_headers/accel.o 00:04:02.583 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:02.583 CXX test/cpp_headers/accel_module.o 00:04:02.583 CXX test/cpp_headers/assert.o 00:04:02.583 CXX test/cpp_headers/barrier.o 00:04:02.583 CXX test/cpp_headers/base64.o 00:04:02.583 CXX test/cpp_headers/bdev.o 00:04:02.583 CXX test/cpp_headers/bdev_module.o 00:04:02.583 CXX test/cpp_headers/bdev_zone.o 00:04:02.583 CXX test/cpp_headers/bit_array.o 00:04:02.583 CXX test/cpp_headers/bit_pool.o 00:04:02.583 CC app/spdk_dd/spdk_dd.o 00:04:02.583 CXX test/cpp_headers/blob_bdev.o 00:04:02.583 CXX test/cpp_headers/blobfs_bdev.o 00:04:02.583 CXX test/cpp_headers/blobfs.o 00:04:02.583 CXX test/cpp_headers/blob.o 00:04:02.583 CXX test/cpp_headers/conf.o 00:04:02.583 CXX test/cpp_headers/config.o 00:04:02.583 CXX test/cpp_headers/cpuset.o 00:04:02.583 CXX test/cpp_headers/crc16.o 00:04:02.583 CC app/iscsi_tgt/iscsi_tgt.o 00:04:02.583 CC app/nvmf_tgt/nvmf_main.o 00:04:02.583 CXX test/cpp_headers/crc32.o 00:04:02.583 CC test/env/vtophys/vtophys.o 00:04:02.583 CC test/env/memory/memory_ut.o 00:04:02.583 CC app/spdk_tgt/spdk_tgt.o 00:04:02.583 CC test/env/pci/pci_ut.o 
00:04:02.583 CC examples/util/zipf/zipf.o 00:04:02.583 CC test/app/jsoncat/jsoncat.o 00:04:02.583 CC examples/ioat/verify/verify.o 00:04:02.584 CC examples/ioat/perf/perf.o 00:04:02.584 CC test/thread/poller_perf/poller_perf.o 00:04:02.584 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:02.584 CC test/app/histogram_perf/histogram_perf.o 00:04:02.584 CC test/app/stub/stub.o 00:04:02.584 CC app/fio/nvme/fio_plugin.o 00:04:02.584 CC test/dma/test_dma/test_dma.o 00:04:02.584 CC test/app/bdev_svc/bdev_svc.o 00:04:02.584 CC app/fio/bdev/fio_plugin.o 00:04:02.846 CC test/env/mem_callbacks/mem_callbacks.o 00:04:02.846 LINK spdk_lspci 00:04:02.846 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:02.846 LINK rpc_client_test 00:04:02.846 LINK spdk_nvme_discover 00:04:02.846 LINK vtophys 00:04:02.846 LINK jsoncat 00:04:02.846 LINK interrupt_tgt 00:04:02.846 CXX test/cpp_headers/crc64.o 00:04:02.846 LINK histogram_perf 00:04:02.846 CXX test/cpp_headers/dif.o 00:04:02.846 CXX test/cpp_headers/dma.o 00:04:02.846 CXX test/cpp_headers/endian.o 00:04:02.846 CXX test/cpp_headers/env_dpdk.o 00:04:02.846 CXX test/cpp_headers/env.o 00:04:02.846 CXX test/cpp_headers/event.o 00:04:02.846 CXX test/cpp_headers/fd_group.o 00:04:02.846 LINK poller_perf 00:04:02.846 CXX test/cpp_headers/fd.o 00:04:03.111 CXX test/cpp_headers/file.o 00:04:03.111 LINK nvmf_tgt 00:04:03.111 LINK zipf 00:04:03.111 LINK env_dpdk_post_init 00:04:03.111 CXX test/cpp_headers/ftl.o 00:04:03.111 LINK stub 00:04:03.111 CXX test/cpp_headers/gpt_spec.o 00:04:03.111 LINK iscsi_tgt 00:04:03.111 CXX test/cpp_headers/hexlify.o 00:04:03.111 LINK spdk_trace_record 00:04:03.111 CXX test/cpp_headers/histogram_data.o 00:04:03.111 CXX test/cpp_headers/idxd.o 00:04:03.111 CXX test/cpp_headers/idxd_spec.o 00:04:03.111 LINK bdev_svc 00:04:03.111 LINK ioat_perf 00:04:03.111 LINK spdk_tgt 00:04:03.111 LINK verify 00:04:03.111 CXX test/cpp_headers/init.o 00:04:03.111 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:03.111 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:03.111 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:03.111 LINK mem_callbacks 00:04:03.111 CXX test/cpp_headers/ioat.o 00:04:03.111 CXX test/cpp_headers/ioat_spec.o 00:04:03.111 CXX test/cpp_headers/iscsi_spec.o 00:04:03.375 CXX test/cpp_headers/json.o 00:04:03.375 CXX test/cpp_headers/jsonrpc.o 00:04:03.375 LINK spdk_dd 00:04:03.375 CXX test/cpp_headers/keyring.o 00:04:03.375 CXX test/cpp_headers/keyring_module.o 00:04:03.375 LINK spdk_trace 00:04:03.375 CXX test/cpp_headers/likely.o 00:04:03.375 CXX test/cpp_headers/log.o 00:04:03.375 CXX test/cpp_headers/lvol.o 00:04:03.375 LINK pci_ut 00:04:03.375 CXX test/cpp_headers/memory.o 00:04:03.375 CXX test/cpp_headers/mmio.o 00:04:03.375 CXX test/cpp_headers/nbd.o 00:04:03.375 CXX test/cpp_headers/notify.o 00:04:03.375 CXX test/cpp_headers/nvme.o 00:04:03.375 CXX test/cpp_headers/nvme_intel.o 00:04:03.375 CXX test/cpp_headers/nvme_ocssd.o 00:04:03.375 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:03.375 CXX test/cpp_headers/nvme_spec.o 00:04:03.375 LINK test_dma 00:04:03.375 CXX test/cpp_headers/nvme_zns.o 00:04:03.375 CXX test/cpp_headers/nvmf_cmd.o 00:04:03.375 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:03.375 CXX test/cpp_headers/nvmf.o 00:04:03.375 CXX test/cpp_headers/nvmf_spec.o 00:04:03.375 CXX test/cpp_headers/nvmf_transport.o 00:04:03.375 CXX test/cpp_headers/opal.o 00:04:03.375 CXX test/cpp_headers/opal_spec.o 00:04:03.375 CXX test/cpp_headers/pci_ids.o 00:04:03.375 CXX test/cpp_headers/pipe.o 00:04:03.375 CXX test/cpp_headers/queue.o 
00:04:03.639 CXX test/cpp_headers/reduce.o 00:04:03.639 CXX test/cpp_headers/rpc.o 00:04:03.639 CXX test/cpp_headers/scheduler.o 00:04:03.639 CXX test/cpp_headers/scsi.o 00:04:03.639 LINK nvme_fuzz 00:04:03.639 CC test/event/event_perf/event_perf.o 00:04:03.639 CC test/event/reactor/reactor.o 00:04:03.639 LINK spdk_bdev 00:04:03.639 CC examples/sock/hello_world/hello_sock.o 00:04:03.639 CC examples/vmd/lsvmd/lsvmd.o 00:04:03.639 LINK spdk_nvme 00:04:03.639 CC examples/idxd/perf/perf.o 00:04:03.639 CXX test/cpp_headers/scsi_spec.o 00:04:03.639 CC examples/thread/thread/thread_ex.o 00:04:03.639 CXX test/cpp_headers/sock.o 00:04:03.639 CXX test/cpp_headers/stdinc.o 00:04:03.639 CXX test/cpp_headers/string.o 00:04:03.900 CC test/event/reactor_perf/reactor_perf.o 00:04:03.900 CXX test/cpp_headers/thread.o 00:04:03.900 CXX test/cpp_headers/trace.o 00:04:03.900 CXX test/cpp_headers/trace_parser.o 00:04:03.900 CC examples/vmd/led/led.o 00:04:03.900 CC test/event/app_repeat/app_repeat.o 00:04:03.900 CXX test/cpp_headers/tree.o 00:04:03.900 CXX test/cpp_headers/ublk.o 00:04:03.900 CXX test/cpp_headers/util.o 00:04:03.900 CXX test/cpp_headers/uuid.o 00:04:03.900 CXX test/cpp_headers/version.o 00:04:03.900 CXX test/cpp_headers/vfio_user_pci.o 00:04:03.900 CXX test/cpp_headers/vfio_user_spec.o 00:04:03.900 CXX test/cpp_headers/vhost.o 00:04:03.900 CXX test/cpp_headers/vmd.o 00:04:03.900 CXX test/cpp_headers/xor.o 00:04:03.900 CXX test/cpp_headers/zipf.o 00:04:03.900 CC test/event/scheduler/scheduler.o 00:04:03.900 LINK vhost_fuzz 00:04:03.900 LINK event_perf 00:04:03.900 LINK reactor 00:04:03.900 CC app/vhost/vhost.o 00:04:03.900 LINK spdk_nvme_perf 00:04:03.900 LINK spdk_nvme_identify 00:04:03.900 LINK lsvmd 00:04:03.900 LINK memory_ut 00:04:04.158 LINK spdk_top 00:04:04.158 LINK reactor_perf 00:04:04.158 LINK hello_sock 00:04:04.158 LINK led 00:04:04.158 CC test/nvme/reserve/reserve.o 00:04:04.158 CC test/nvme/reset/reset.o 00:04:04.158 CC test/nvme/simple_copy/simple_copy.o 00:04:04.159 CC test/nvme/aer/aer.o 00:04:04.159 CC test/nvme/e2edp/nvme_dp.o 00:04:04.159 CC test/nvme/connect_stress/connect_stress.o 00:04:04.159 CC test/nvme/startup/startup.o 00:04:04.159 CC test/nvme/sgl/sgl.o 00:04:04.159 CC test/nvme/err_injection/err_injection.o 00:04:04.159 LINK app_repeat 00:04:04.159 CC test/nvme/overhead/overhead.o 00:04:04.159 CC test/nvme/compliance/nvme_compliance.o 00:04:04.159 CC test/nvme/boot_partition/boot_partition.o 00:04:04.159 CC test/accel/dif/dif.o 00:04:04.159 CC test/blobfs/mkfs/mkfs.o 00:04:04.159 CC test/nvme/fused_ordering/fused_ordering.o 00:04:04.159 LINK thread 00:04:04.159 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:04.159 CC test/nvme/cuse/cuse.o 00:04:04.159 CC test/nvme/fdp/fdp.o 00:04:04.416 CC test/lvol/esnap/esnap.o 00:04:04.416 LINK idxd_perf 00:04:04.416 LINK vhost 00:04:04.416 LINK scheduler 00:04:04.416 LINK err_injection 00:04:04.416 LINK boot_partition 00:04:04.416 LINK fused_ordering 00:04:04.416 LINK doorbell_aers 00:04:04.416 LINK reserve 00:04:04.416 LINK connect_stress 00:04:04.674 LINK startup 00:04:04.674 CC examples/nvme/reconnect/reconnect.o 00:04:04.674 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:04.674 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:04.674 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:04.674 LINK sgl 00:04:04.674 CC examples/nvme/arbitration/arbitration.o 00:04:04.674 CC examples/nvme/abort/abort.o 00:04:04.674 CC examples/nvme/hello_world/hello_world.o 00:04:04.674 CC examples/nvme/hotplug/hotplug.o 00:04:04.674 LINK mkfs 
00:04:04.674 LINK overhead 00:04:04.674 LINK nvme_dp 00:04:04.674 LINK simple_copy 00:04:04.674 LINK reset 00:04:04.674 LINK aer 00:04:04.674 LINK fdp 00:04:04.674 CC examples/accel/perf/accel_perf.o 00:04:04.674 CC examples/blob/hello_world/hello_blob.o 00:04:04.674 CC examples/blob/cli/blobcli.o 00:04:04.674 LINK nvme_compliance 00:04:04.674 LINK dif 00:04:04.674 LINK pmr_persistence 00:04:04.931 LINK hello_world 00:04:04.931 LINK cmb_copy 00:04:04.931 LINK reconnect 00:04:04.931 LINK hotplug 00:04:04.931 LINK abort 00:04:04.931 LINK hello_blob 00:04:04.931 LINK arbitration 00:04:05.188 LINK accel_perf 00:04:05.188 CC test/bdev/bdevio/bdevio.o 00:04:05.188 LINK nvme_manage 00:04:05.188 LINK blobcli 00:04:05.446 CC examples/bdev/hello_world/hello_bdev.o 00:04:05.446 CC examples/bdev/bdevperf/bdevperf.o 00:04:05.704 LINK iscsi_fuzz 00:04:05.704 LINK bdevio 00:04:05.704 LINK hello_bdev 00:04:05.963 LINK cuse 00:04:06.221 LINK bdevperf 00:04:06.788 CC examples/nvmf/nvmf/nvmf.o 00:04:07.046 LINK nvmf 00:04:09.586 LINK esnap 00:04:09.586 00:04:09.586 real 0m41.324s 00:04:09.586 user 7m22.920s 00:04:09.586 sys 1m50.853s 00:04:09.586 09:13:53 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:04:09.586 09:13:53 make -- common/autotest_common.sh@10 -- $ set +x 00:04:09.586 ************************************ 00:04:09.586 END TEST make 00:04:09.586 ************************************ 00:04:09.586 09:13:54 -- common/autotest_common.sh@1142 -- $ return 0 00:04:09.586 09:13:54 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:09.586 09:13:54 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:09.586 09:13:54 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:09.586 09:13:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:09.586 09:13:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:09.586 09:13:54 -- pm/common@44 -- $ pid=500780 00:04:09.586 09:13:54 -- pm/common@50 -- $ kill -TERM 500780 00:04:09.586 09:13:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:09.586 09:13:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:09.586 09:13:54 -- pm/common@44 -- $ pid=500782 00:04:09.586 09:13:54 -- pm/common@50 -- $ kill -TERM 500782 00:04:09.586 09:13:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:09.586 09:13:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:09.586 09:13:54 -- pm/common@44 -- $ pid=500784 00:04:09.586 09:13:54 -- pm/common@50 -- $ kill -TERM 500784 00:04:09.586 09:13:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:09.586 09:13:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:09.586 09:13:54 -- pm/common@44 -- $ pid=500813 00:04:09.586 09:13:54 -- pm/common@50 -- $ sudo -E kill -TERM 500813 00:04:09.844 09:13:54 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:09.844 09:13:54 -- nvmf/common.sh@7 -- # uname -s 00:04:09.844 09:13:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:09.844 09:13:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:09.844 09:13:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:09.844 09:13:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:09.844 09:13:54 -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:09.844 09:13:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:09.844 09:13:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:09.844 09:13:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:09.844 09:13:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:09.844 09:13:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:09.844 09:13:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:09.844 09:13:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:09.844 09:13:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:09.844 09:13:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:09.844 09:13:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:09.844 09:13:54 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:09.844 09:13:54 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:09.844 09:13:54 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:09.844 09:13:54 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:09.844 09:13:54 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:09.844 09:13:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:09.844 09:13:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:09.844 09:13:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:09.844 09:13:54 -- paths/export.sh@5 -- # export PATH 00:04:09.844 09:13:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:09.844 09:13:54 -- nvmf/common.sh@47 -- # : 0 00:04:09.844 09:13:54 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:09.844 09:13:54 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:09.844 09:13:54 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:09.844 09:13:54 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:09.844 09:13:54 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:09.844 09:13:54 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:09.844 09:13:54 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:09.844 09:13:54 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:09.844 09:13:54 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:09.844 09:13:54 -- spdk/autotest.sh@32 -- # uname -s 00:04:09.844 09:13:54 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:09.844 09:13:54 -- spdk/autotest.sh@33 -- # 
old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:09.844 09:13:54 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:09.844 09:13:54 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:09.844 09:13:54 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:09.844 09:13:54 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:09.844 09:13:54 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:09.844 09:13:54 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:09.844 09:13:54 -- spdk/autotest.sh@48 -- # udevadm_pid=576685 00:04:09.844 09:13:54 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:09.844 09:13:54 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:09.844 09:13:54 -- pm/common@17 -- # local monitor 00:04:09.844 09:13:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:09.844 09:13:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:09.844 09:13:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:09.844 09:13:54 -- pm/common@21 -- # date +%s 00:04:09.844 09:13:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:09.844 09:13:54 -- pm/common@21 -- # date +%s 00:04:09.844 09:13:54 -- pm/common@25 -- # sleep 1 00:04:09.844 09:13:54 -- pm/common@21 -- # date +%s 00:04:09.844 09:13:54 -- pm/common@21 -- # date +%s 00:04:09.844 09:13:54 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720941234 00:04:09.844 09:13:54 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720941234 00:04:09.844 09:13:54 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720941234 00:04:09.844 09:13:54 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720941234 00:04:09.844 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720941234_collect-vmstat.pm.log 00:04:09.844 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720941234_collect-cpu-load.pm.log 00:04:09.844 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720941234_collect-cpu-temp.pm.log 00:04:09.844 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720941234_collect-bmc-pm.bmc.pm.log 00:04:10.778 09:13:55 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:10.778 09:13:55 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:10.778 09:13:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:10.778 09:13:55 -- common/autotest_common.sh@10 -- # set +x 00:04:10.778 09:13:55 -- spdk/autotest.sh@59 -- # create_test_list 00:04:10.778 09:13:55 -- common/autotest_common.sh@746 -- # xtrace_disable 
00:04:10.778 09:13:55 -- common/autotest_common.sh@10 -- # set +x 00:04:10.778 09:13:55 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:10.778 09:13:55 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:10.778 09:13:55 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:10.778 09:13:55 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:10.778 09:13:55 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:10.778 09:13:55 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:10.778 09:13:55 -- common/autotest_common.sh@1455 -- # uname 00:04:10.778 09:13:55 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:10.778 09:13:55 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:10.778 09:13:55 -- common/autotest_common.sh@1475 -- # uname 00:04:10.778 09:13:55 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:10.778 09:13:55 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:10.778 09:13:55 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:10.778 09:13:55 -- spdk/autotest.sh@72 -- # hash lcov 00:04:10.778 09:13:55 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:10.778 09:13:55 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:10.778 --rc lcov_branch_coverage=1 00:04:10.778 --rc lcov_function_coverage=1 00:04:10.778 --rc genhtml_branch_coverage=1 00:04:10.778 --rc genhtml_function_coverage=1 00:04:10.778 --rc genhtml_legend=1 00:04:10.778 --rc geninfo_all_blocks=1 00:04:10.778 ' 00:04:10.778 09:13:55 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:10.778 --rc lcov_branch_coverage=1 00:04:10.778 --rc lcov_function_coverage=1 00:04:10.778 --rc genhtml_branch_coverage=1 00:04:10.778 --rc genhtml_function_coverage=1 00:04:10.778 --rc genhtml_legend=1 00:04:10.778 --rc geninfo_all_blocks=1 00:04:10.778 ' 00:04:10.778 09:13:55 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:10.778 --rc lcov_branch_coverage=1 00:04:10.778 --rc lcov_function_coverage=1 00:04:10.778 --rc genhtml_branch_coverage=1 00:04:10.778 --rc genhtml_function_coverage=1 00:04:10.778 --rc genhtml_legend=1 00:04:10.778 --rc geninfo_all_blocks=1 00:04:10.778 --no-external' 00:04:10.778 09:13:55 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:10.778 --rc lcov_branch_coverage=1 00:04:10.778 --rc lcov_function_coverage=1 00:04:10.778 --rc genhtml_branch_coverage=1 00:04:10.778 --rc genhtml_function_coverage=1 00:04:10.778 --rc genhtml_legend=1 00:04:10.778 --rc geninfo_all_blocks=1 00:04:10.778 --no-external' 00:04:10.778 09:13:55 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:11.036 lcov: LCOV version 1.14 00:04:11.036 09:13:55 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:16.297 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:16.297 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:04:16.297 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:16.297 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:04:16.297 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:16.297 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:04:16.297 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:16.298 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:04:16.298 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:16.298 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:04:16.298 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:16.298 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:04:16.298 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:16.298 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:04:16.298 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:16.298 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:04:16.298 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:16.298 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:04:16.298 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:16.298 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:04:16.298 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:16.298 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:16.298 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:16.298 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:04:16.298 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:16.298 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:04:16.298 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:16.298 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:04:16.298 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 
00:04:16.298 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:04:16.298 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:04:16.298 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:04:16.584 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:16.584 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:04:16.584 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:16.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:04:16.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:16.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:04:16.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:16.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:04:16.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:16.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:04:16.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:16.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:04:16.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:16.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:04:16.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:16.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:04:16.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:04:16.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:04:16.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:04:16.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:04:16.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:16.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:04:16.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:16.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:04:16.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:04:16.585 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:04:16.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:16.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:04:16.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:16.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:04:16.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:16.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:04:16.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:16.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:04:16.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:16.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:04:16.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:16.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:04:16.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:04:16.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:04:16.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:16.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:04:16.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:16.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:04:16.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:16.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:16.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:04:16.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:04:16.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:16.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:04:16.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:16.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:04:16.585 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:16.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:04:16.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:16.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:04:16.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:04:16.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:04:16.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:16.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:04:16.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:16.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:04:16.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:16.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:04:16.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:16.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:04:16.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:16.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:04:16.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:16.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:04:16.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:16.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:04:16.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:16.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:16.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:16.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:16.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:16.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:04:16.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:16.585 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:04:16.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:16.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:16.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:16.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:16.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:16.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:04:16.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:16.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:16.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:16.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:16.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:16.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:04:16.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:16.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:04:16.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:16.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:04:16.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:16.586 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:04:16.586 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:16.586 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:04:16.586 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:16.586 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:04:16.586 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:16.586 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:04:16.586 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:16.586 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:04:16.586 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:16.586 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:04:16.586 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:16.586 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:04:16.586 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:16.586 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:04:16.586 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:16.586 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:04:16.586 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:04:16.586 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:04:16.586 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:16.586 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:04:16.586 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:16.586 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:04:16.586 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:16.586 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:04:16.586 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:16.586 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:04:16.586 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:16.586 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:04:16.586 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:16.586 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:04:16.586 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:04:16.586 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:04:16.586 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:04:16.586 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:04:16.586 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:16.586 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:16.586 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:16.586 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:16.586 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:16.586 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:04:16.586 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:16.586 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:04:16.844 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:16.844 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:04:16.844 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:16.844 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:04:38.759 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:38.759 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:45.317 09:14:28 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:45.317 09:14:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:45.317 09:14:28 -- common/autotest_common.sh@10 -- # set +x 00:04:45.317 09:14:28 -- spdk/autotest.sh@91 -- # rm -f 00:04:45.317 09:14:28 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:45.575 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:04:45.575 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:04:45.833 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:04:45.833 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:04:45.833 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:04:45.833 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:04:45.833 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:04:45.833 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:04:45.833 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:04:45.833 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:04:45.833 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:04:45.833 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:04:45.833 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:04:45.833 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:04:45.833 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:04:45.833 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:04:45.833 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:04:46.091 09:14:30 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:46.091 09:14:30 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:46.091 09:14:30 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:46.091 09:14:30 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:46.091 09:14:30 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:46.091 09:14:30 -- 
common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:46.091 09:14:30 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:46.091 09:14:30 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:46.091 09:14:30 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:46.091 09:14:30 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:46.091 09:14:30 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:46.091 09:14:30 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:46.091 09:14:30 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:46.091 09:14:30 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:46.091 09:14:30 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:46.091 No valid GPT data, bailing 00:04:46.091 09:14:30 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:46.091 09:14:30 -- scripts/common.sh@391 -- # pt= 00:04:46.091 09:14:30 -- scripts/common.sh@392 -- # return 1 00:04:46.091 09:14:30 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:46.091 1+0 records in 00:04:46.091 1+0 records out 00:04:46.091 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00227055 s, 462 MB/s 00:04:46.091 09:14:30 -- spdk/autotest.sh@118 -- # sync 00:04:46.091 09:14:30 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:46.091 09:14:30 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:46.091 09:14:30 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:47.992 09:14:32 -- spdk/autotest.sh@124 -- # uname -s 00:04:47.992 09:14:32 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:47.992 09:14:32 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:47.992 09:14:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:47.992 09:14:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.992 09:14:32 -- common/autotest_common.sh@10 -- # set +x 00:04:47.992 ************************************ 00:04:47.992 START TEST setup.sh 00:04:47.992 ************************************ 00:04:47.992 09:14:32 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:47.992 * Looking for test storage... 00:04:47.992 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:47.992 09:14:32 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:47.992 09:14:32 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:47.992 09:14:32 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:47.992 09:14:32 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:47.992 09:14:32 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.992 09:14:32 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:47.992 ************************************ 00:04:47.992 START TEST acl 00:04:47.992 ************************************ 00:04:47.992 09:14:32 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:47.992 * Looking for test storage... 
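Annotation on the pre_cleanup pass traced just above: before the setup tests begin, autotest.sh walks every whole NVMe namespace, skips any zoned namespace, asks block_in_use whether the device holds a partition table (here spdk-gpt.py reported "No valid GPT data, bailing" and blkid returned an empty PTTYPE, so block_in_use returned 1), and then zeroes the first MiB of the free namespace before syncing. The lines below are a minimal standalone sketch of that flow, not the exact autotest.sh/scripts/common.sh implementation; the plain "*p*" partition filter and the blkid-only in-use check are simplifications of what the trace shows.

  for dev in /dev/nvme*n*; do
    [[ $dev == *p* ]] && continue                                   # whole namespaces only, skip partitions
    name=${dev##*/}
    if [[ -e /sys/block/$name/queue/zoned ]] &&
       [[ $(cat /sys/block/$name/queue/zoned) != none ]]; then
      continue                                                      # leave zoned namespaces untouched
    fi
    # Treat the namespace as free when blkid finds no partition table (cf. blkid -s PTTYPE above).
    if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
      dd if=/dev/zero of="$dev" bs=1M count=1                       # wipe the first MiB, as in the log
    fi
  done
  sync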
00:04:47.992 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:47.992 09:14:32 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:47.992 09:14:32 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:47.992 09:14:32 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:47.992 09:14:32 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:47.992 09:14:32 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:47.992 09:14:32 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:47.992 09:14:32 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:47.992 09:14:32 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:47.992 09:14:32 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:47.992 09:14:32 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:47.992 09:14:32 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:47.992 09:14:32 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:47.992 09:14:32 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:47.993 09:14:32 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:47.993 09:14:32 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:47.993 09:14:32 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:49.366 09:14:33 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:49.366 09:14:33 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:49.366 09:14:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:49.366 09:14:33 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:49.366 09:14:33 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:49.366 09:14:33 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:50.741 Hugepages 00:04:50.741 node hugesize free / total 00:04:50.741 09:14:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:50.742 00:04:50.742 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:50.742 09:14:34 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:50.742 09:14:34 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:50.742 09:14:34 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:50.742 09:14:34 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.742 09:14:34 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:50.742 ************************************ 00:04:50.742 START TEST denied 00:04:50.742 ************************************ 00:04:50.742 09:14:34 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:04:50.742 09:14:34 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:04:50.742 09:14:34 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:50.742 09:14:34 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:04:50.742 09:14:34 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:50.742 09:14:34 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:52.116 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:04:52.116 09:14:36 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:04:52.116 09:14:36 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:52.116 09:14:36 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:52.116 09:14:36 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:04:52.116 09:14:36 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:04:52.116 09:14:36 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:52.116 09:14:36 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:52.116 09:14:36 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:52.116 09:14:36 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:52.116 09:14:36 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:54.682 00:04:54.682 real 0m3.828s 00:04:54.682 user 0m1.112s 00:04:54.682 sys 0m1.818s 00:04:54.682 09:14:38 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:54.682 09:14:38 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:54.682 ************************************ 00:04:54.682 END TEST denied 00:04:54.682 ************************************ 00:04:54.682 09:14:38 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:54.682 09:14:38 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:54.682 09:14:38 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:54.682 09:14:38 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.682 09:14:38 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:54.682 ************************************ 00:04:54.682 START TEST allowed 00:04:54.682 ************************************ 00:04:54.682 09:14:38 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:04:54.682 09:14:38 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:04:54.682 09:14:38 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:54.682 09:14:38 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:04:54.682 09:14:38 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:54.682 09:14:38 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:57.216 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:57.217 09:14:41 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:57.217 09:14:41 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:57.217 09:14:41 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:57.217 09:14:41 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:57.217 09:14:41 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:58.590 00:04:58.590 real 0m3.936s 00:04:58.590 user 0m1.032s 00:04:58.590 sys 0m1.728s 00:04:58.590 09:14:42 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.590 09:14:42 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:58.590 ************************************ 00:04:58.591 END TEST allowed 00:04:58.591 ************************************ 00:04:58.591 09:14:42 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:58.591 00:04:58.591 real 0m10.521s 00:04:58.591 user 0m3.246s 00:04:58.591 sys 0m5.264s 00:04:58.591 09:14:42 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.591 09:14:42 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:58.591 ************************************ 00:04:58.591 END TEST acl 00:04:58.591 ************************************ 00:04:58.591 09:14:42 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:58.591 09:14:42 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:58.591 09:14:42 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:58.591 09:14:42 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.591 09:14:42 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:58.591 ************************************ 00:04:58.591 START TEST hugepages 00:04:58.591 ************************************ 00:04:58.591 09:14:42 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:58.591 * Looking for test storage... 00:04:58.591 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41659720 kB' 'MemAvailable: 45169708 kB' 'Buffers: 2704 kB' 'Cached: 12295652 kB' 'SwapCached: 0 kB' 'Active: 9309272 kB' 'Inactive: 3506552 kB' 'Active(anon): 8914920 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520764 kB' 'Mapped: 183808 kB' 'Shmem: 8397452 kB' 'KReclaimable: 205412 kB' 'Slab: 583736 kB' 'SReclaimable: 205412 kB' 'SUnreclaim: 378324 kB' 'KernelStack: 12896 kB' 'PageTables: 8392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562304 kB' 'Committed_AS: 10047332 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196388 kB' 'VmallocChunk: 0 kB' 'Percpu: 38784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1926748 kB' 'DirectMap2M: 15818752 kB' 'DirectMap1G: 51380224 kB' 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.591 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.592 09:14:42 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.592 09:14:42 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.592 09:14:42 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.592 09:14:42 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.592 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.593 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:58.593 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:58.593 09:14:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:58.593 09:14:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.593 09:14:42 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:58.593 09:14:42 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:58.593 09:14:42 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:58.593 09:14:42 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:58.593 09:14:42 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:58.593 09:14:42 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:58.593 09:14:42 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:58.593 09:14:42 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:58.593 09:14:42 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:58.593 09:14:42 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:58.593 09:14:42 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:58.593 09:14:42 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:58.593 09:14:42 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:58.593 09:14:42 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:58.593 09:14:42 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:58.593 09:14:42 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:58.593 09:14:42 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:58.593 09:14:42 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:58.593 09:14:42 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:58.593 09:14:42 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:58.593 09:14:42 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:58.593 09:14:42 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:58.593 09:14:42 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:58.593 09:14:42 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:58.593 09:14:42 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:58.593 09:14:42 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:58.593 09:14:42 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:58.593 
09:14:42 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:58.593 09:14:42 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:58.593 09:14:42 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:58.593 09:14:42 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:58.593 09:14:42 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:58.593 09:14:42 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:58.593 09:14:42 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.593 09:14:42 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:58.593 ************************************ 00:04:58.593 START TEST default_setup 00:04:58.593 ************************************ 00:04:58.593 09:14:42 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:04:58.593 09:14:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:58.593 09:14:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:58.593 09:14:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:58.593 09:14:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:58.593 09:14:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:58.593 09:14:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:58.593 09:14:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:58.593 09:14:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:58.593 09:14:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:58.593 09:14:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:58.593 09:14:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:58.593 09:14:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:58.593 09:14:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:58.593 09:14:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:58.593 09:14:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:58.593 09:14:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:58.593 09:14:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:58.593 09:14:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:58.593 09:14:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:58.593 09:14:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:58.593 09:14:42 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:58.593 09:14:42 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:59.976 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:59.976 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:59.976 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:59.976 
0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:59.976 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:59.976 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:59.976 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:59.976 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:59.976 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:59.976 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:59.976 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:59.976 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:59.976 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:59.976 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:59.976 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:59.976 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:00.917 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43765628 kB' 'MemAvailable: 47275608 kB' 'Buffers: 2704 kB' 'Cached: 12295752 kB' 'SwapCached: 0 kB' 'Active: 9326492 kB' 'Inactive: 3506552 kB' 'Active(anon): 8932140 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537720 kB' 'Mapped: 183916 kB' 'Shmem: 8397552 kB' 'KReclaimable: 205396 kB' 'Slab: 583380 kB' 'SReclaimable: 205396 kB' 'SUnreclaim: 377984 kB' 
'KernelStack: 12736 kB' 'PageTables: 7820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10064476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196644 kB' 'VmallocChunk: 0 kB' 'Percpu: 38784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1926748 kB' 'DirectMap2M: 15818752 kB' 'DirectMap1G: 51380224 kB' 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.917 
09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.917 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.918 09:14:45 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
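The long run of "continue" entries above is get_meminfo() in setup/common.sh skipping every /proc/meminfo field until it reaches the one it was asked for (here AnonHugePages). A minimal sketch of that matching loop, reconstructed only from the traced commands (the real helper also handles per-node meminfo and differs in details), looks roughly like this:

    # Hypothetical reconstruction; names var/val/get come from the trace, the rest is assumed.
    get_meminfo_sketch() {
        local get=$1            # field to look up, e.g. AnonHugePages or HugePages_Surp
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # every non-matching field emits one "continue" trace entry
            echo "$val"                        # numeric value; the "kB" unit lands in _ and is dropped
            return 0
        done < /proc/meminfo
        echo 0                                  # assumed fallback when the field is absent
    }
    # Example: get_meminfo_sketch AnonHugePages   ->  0 on this builder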
00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:00.918 09:14:45 
setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.918 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43768556 kB' 'MemAvailable: 47278536 kB' 'Buffers: 2704 kB' 'Cached: 12295752 kB' 'SwapCached: 0 kB' 'Active: 9327068 kB' 'Inactive: 3506552 kB' 'Active(anon): 8932716 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538424 kB' 'Mapped: 183964 kB' 'Shmem: 8397552 kB' 'KReclaimable: 205396 kB' 'Slab: 583420 kB' 'SReclaimable: 205396 kB' 'SUnreclaim: 378024 kB' 'KernelStack: 12816 kB' 'PageTables: 8252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10064492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196612 kB' 'VmallocChunk: 0 kB' 'Percpu: 38784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1926748 kB' 'DirectMap2M: 15818752 kB' 'DirectMap1G: 51380224 kB' 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # continue 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:05:00.919 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.920 09:14:45 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.920 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43768976 kB' 'MemAvailable: 47278956 kB' 'Buffers: 2704 kB' 'Cached: 12295772 kB' 'SwapCached: 0 kB' 'Active: 9326672 kB' 'Inactive: 3506552 kB' 'Active(anon): 8932320 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537924 kB' 'Mapped: 183888 kB' 'Shmem: 8397572 kB' 'KReclaimable: 205396 kB' 'Slab: 583392 kB' 'SReclaimable: 205396 kB' 'SUnreclaim: 377996 kB' 'KernelStack: 12816 kB' 'PageTables: 8188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10064516 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196628 kB' 'VmallocChunk: 0 kB' 'Percpu: 38784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1926748 kB' 'DirectMap2M: 15818752 kB' 'DirectMap1G: 51380224 kB' 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # IFS=': ' 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
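Around this point the caller in hugepages.sh is collecting three counters from the same snapshot (anon at hugepages.sh@97, surp at @99, resv at @100) and, in the entries that follow, echoes them and checks them against the requested page count (@102-@110). A rough sketch of that sequence under the same assumptions as the get_meminfo_sketch above, with names taken from the trace:

    nr_hugepages=1024                               # the value this run expects (echoed at hugepages.sh@102)
    anon=$(get_meminfo_sketch AnonHugePages)        # hugepages.sh@97: anon=0
    surp=$(get_meminfo_sketch HugePages_Surp)       # hugepages.sh@99: surp=0
    resv=$(get_meminfo_sketch HugePages_Rsvd)       # hugepages.sh@100: resv=0
    echo "nr_hugepages=$nr_hugepages" "resv_hugepages=$resv" "surplus_hugepages=$surp" "anon_hugepages=$anon"
    (( $(get_meminfo_sketch HugePages_Total) == nr_hugepages + surp + resv )) \
        && echo "hugepage accounting consistent"    # mirrors the check traced at hugepages.sh@107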
00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.921 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.922 
09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:00.922 09:14:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:00.923 nr_hugepages=1024 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:00.923 resv_hugepages=0 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:00.923 surplus_hugepages=0 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:00.923 anon_hugepages=0 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43768220 
kB' 'MemAvailable: 47278200 kB' 'Buffers: 2704 kB' 'Cached: 12295792 kB' 'SwapCached: 0 kB' 'Active: 9326676 kB' 'Inactive: 3506552 kB' 'Active(anon): 8932324 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537924 kB' 'Mapped: 183888 kB' 'Shmem: 8397592 kB' 'KReclaimable: 205396 kB' 'Slab: 583392 kB' 'SReclaimable: 205396 kB' 'SUnreclaim: 377996 kB' 'KernelStack: 12816 kB' 'PageTables: 8188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10064536 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196628 kB' 'VmallocChunk: 0 kB' 'Percpu: 38784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1926748 kB' 'DirectMap2M: 15818752 kB' 'DirectMap1G: 51380224 kB' 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.923 09:14:45 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.923 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
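[editor's note] The long run of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]" / "continue" entries in this stretch is setup/common.sh's get_meminfo walking /proc/meminfo field by field until it reaches HugePages_Total. A minimal standalone sketch of that lookup pattern, assuming plain bash and a hypothetical helper name (this is not the actual SPDK get_meminfo, which also handles the per-node meminfo files), could look like:

    # Hypothetical sketch, not the SPDK setup/common.sh implementation:
    # split each /proc/meminfo line on ': ' and skip every field that is
    # not the one requested, the same continue-until-match pattern the
    # trace above is exercising.
    lookup_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # not the requested field, keep scanning
            echo "$val"                        # numeric value, e.g. 1024
            return 0
        done < /proc/meminfo
        return 1
    }

On the host traced here, "lookup_meminfo HugePages_Total" would print 1024, matching the value echoed by get_meminfo a few entries further down.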
00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.924 09:14:45 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.924 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
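[editor's note] The entries that follow show the value being returned (echo 1024 / return 0), after which hugepages.sh checks it against the requested count ((( 1024 == nr_hugepages + surp + resv ))) and repeats the pass per NUMA node, ending in "node0=1024 expecting 1024". A compact sketch of that verification idea, reusing the hypothetical lookup_meminfo helper from the sketch above and standard sysfs paths (variable names are illustrative, not the hugepages.sh ones):

    # Hypothetical verification sketch: confirm the kernel actually holds the
    # requested number of 2048 kB hugepages, globally and per NUMA node.
    expected=1024
    total=$(lookup_meminfo HugePages_Total)
    (( total == expected )) || echo "unexpected global HugePages_Total: $total"
    for d in /sys/devices/system/node/node*/hugepages/hugepages-2048kB; do
        node=${d#*node/node}; node=${node%%/*}
        printf 'node%s=%s\n' "$node" "$(cat "$d/nr_hugepages")"
    done

In the default_setup run traced here all 1024 pages land on node 0, so this loop would print node0=1024 and node1=0, matching the nodes_sys values recorded below.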
00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26204388 kB' 'MemUsed: 6625496 kB' 'SwapCached: 0 kB' 'Active: 3268924 kB' 'Inactive: 108696 kB' 'Active(anon): 3158036 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 108696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3091196 kB' 'Mapped: 38356 kB' 'AnonPages: 289556 kB' 'Shmem: 2871612 kB' 'KernelStack: 7800 kB' 'PageTables: 5080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 103476 kB' 'Slab: 331316 kB' 'SReclaimable: 103476 kB' 'SUnreclaim: 227840 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.184 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.185 09:14:45 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:01.185 node0=1024 expecting 1024 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:01.185 00:05:01.185 real 0m2.456s 00:05:01.185 user 0m0.683s 00:05:01.185 sys 0m0.845s 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.185 09:14:45 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:05:01.186 ************************************ 00:05:01.186 END TEST default_setup 00:05:01.186 ************************************ 00:05:01.186 09:14:45 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:01.186 09:14:45 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:01.186 09:14:45 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:01.186 09:14:45 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.186 09:14:45 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:01.186 ************************************ 00:05:01.186 START TEST per_node_1G_alloc 00:05:01.186 ************************************ 00:05:01.186 09:14:45 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:05:01.186 09:14:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:05:01.186 09:14:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:05:01.186 09:14:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:01.186 09:14:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:05:01.186 09:14:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:05:01.186 09:14:45 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:05:01.186 09:14:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:01.186 09:14:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:01.186 09:14:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:01.186 09:14:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:05:01.186 09:14:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:05:01.186 09:14:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:01.186 09:14:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:01.186 09:14:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:01.186 09:14:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:01.186 09:14:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:01.186 09:14:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:05:01.186 09:14:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:01.186 09:14:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:01.186 09:14:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:01.186 09:14:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:01.186 09:14:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:01.186 09:14:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:05:01.186 09:14:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:05:01.186 09:14:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:05:01.186 09:14:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:01.186 09:14:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:02.120 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:02.120 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:02.120 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:02.120 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:02.120 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:02.120 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:02.120 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:02.120 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:02.120 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:02.120 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:02.120 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:02.120 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:02.384 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:02.384 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:02.384 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:02.384 
0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:02.384 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43778392 kB' 'MemAvailable: 47288372 kB' 'Buffers: 2704 kB' 'Cached: 12295868 kB' 'SwapCached: 0 kB' 'Active: 9333392 kB' 'Inactive: 3506552 kB' 'Active(anon): 8939040 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544020 kB' 'Mapped: 184788 kB' 'Shmem: 8397668 kB' 'KReclaimable: 205396 kB' 'Slab: 583448 kB' 'SReclaimable: 205396 kB' 'SUnreclaim: 378052 kB' 'KernelStack: 12832 kB' 'PageTables: 8292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10070848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196616 kB' 'VmallocChunk: 0 kB' 'Percpu: 38784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 
1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1926748 kB' 'DirectMap2M: 15818752 kB' 'DirectMap1G: 51380224 kB' 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.384 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.385 09:14:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # 
get_meminfo HugePages_Surp 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.385 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43783292 kB' 'MemAvailable: 47293272 kB' 'Buffers: 2704 kB' 'Cached: 12295868 kB' 'SwapCached: 0 kB' 'Active: 9327484 kB' 'Inactive: 3506552 kB' 'Active(anon): 8933132 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538092 kB' 'Mapped: 184276 kB' 'Shmem: 8397668 kB' 'KReclaimable: 205396 kB' 'Slab: 583424 kB' 'SReclaimable: 205396 kB' 'SUnreclaim: 378028 kB' 'KernelStack: 12816 kB' 'PageTables: 8160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10065836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196596 kB' 'VmallocChunk: 0 kB' 'Percpu: 38784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1926748 kB' 'DirectMap2M: 15818752 kB' 'DirectMap1G: 51380224 kB' 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.386 09:14:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.386 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@99 -- # surp=0 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.387 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43783632 kB' 'MemAvailable: 47293612 kB' 'Buffers: 2704 kB' 'Cached: 12295888 kB' 'SwapCached: 0 kB' 'Active: 9329732 kB' 'Inactive: 3506552 kB' 'Active(anon): 8935380 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541136 kB' 'Mapped: 184336 kB' 'Shmem: 8397688 kB' 'KReclaimable: 205396 kB' 'Slab: 583512 kB' 'SReclaimable: 205396 kB' 'SUnreclaim: 378116 kB' 'KernelStack: 12784 kB' 'PageTables: 8048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10069292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196532 kB' 'VmallocChunk: 0 kB' 'Percpu: 38784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1926748 kB' 'DirectMap2M: 15818752 kB' 'DirectMap1G: 51380224 kB' 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.388 
09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.388 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.389 09:14:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.389 09:14:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.389 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:02.390 nr_hugepages=1024 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:02.390 
resv_hugepages=0 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:02.390 surplus_hugepages=0 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:02.390 anon_hugepages=0 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43783620 kB' 'MemAvailable: 47293600 kB' 'Buffers: 2704 kB' 'Cached: 12295920 kB' 'SwapCached: 0 kB' 'Active: 9332396 kB' 'Inactive: 3506552 kB' 'Active(anon): 8938044 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543528 kB' 'Mapped: 184816 kB' 'Shmem: 8397720 kB' 'KReclaimable: 205396 kB' 'Slab: 583496 kB' 'SReclaimable: 205396 kB' 'SUnreclaim: 378100 kB' 'KernelStack: 12848 kB' 'PageTables: 8264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10070912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196536 kB' 'VmallocChunk: 0 kB' 'Percpu: 38784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1926748 kB' 'DirectMap2M: 15818752 kB' 'DirectMap1G: 51380224 kB' 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.390 
09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.390 09:14:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.390 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.391 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.391 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.391 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.391 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.391 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.391 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.391 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.391 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.391 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.391 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.391 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.391 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.391 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.391 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.391 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.391 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.391 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.391 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.391 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.391 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.391 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.391 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.391 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.391 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.391 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.391 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.391 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.391 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.391 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.391 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.391 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.391 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.391 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.391 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.391 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.391 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.391 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.391 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.391 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.391 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.652 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.652 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:05:02.652 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.652 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.652 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.652 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.652 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.652 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.652 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.652 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.652 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.652 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.652 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.652 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.652 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.652 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.652 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.652 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.652 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.652 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.652 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.652 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.652 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.652 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.652 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.652 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.652 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.652 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.652 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.652 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.652 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.652 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.652 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.652 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.652 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:05:02.652 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.652 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.652 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.652 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.652 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.652 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.652 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.652 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.652 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:02.653 09:14:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27256380 kB' 'MemUsed: 5573504 kB' 'SwapCached: 0 kB' 'Active: 3268528 kB' 'Inactive: 108696 kB' 'Active(anon): 3157640 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 108696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3091200 kB' 'Mapped: 38368 kB' 'AnonPages: 289132 kB' 'Shmem: 2871616 kB' 'KernelStack: 7784 kB' 'PageTables: 4952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 103476 kB' 'Slab: 331300 kB' 'SReclaimable: 103476 kB' 'SUnreclaim: 227824 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.653 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.654 09:14:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.654 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 16527872 kB' 'MemUsed: 11183952 kB' 'SwapCached: 0 kB' 'Active: 6057852 kB' 'Inactive: 3397856 kB' 'Active(anon): 5774388 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3397856 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9207444 kB' 'Mapped: 145532 kB' 'AnonPages: 248408 kB' 'Shmem: 5526124 kB' 'KernelStack: 5032 kB' 'PageTables: 3176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 101920 kB' 'Slab: 252188 kB' 'SReclaimable: 101920 kB' 'SUnreclaim: 150268 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
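The long runs of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / "continue" entries surrounding the snapshot above are bash xtrace from get_meminfo in setup/common.sh: the helper captures /proc/meminfo (or /sys/devices/system/node/nodeN/meminfo when a node is given), then walks the snapshot field by field until the requested key matches, echoes its value, and returns. A minimal sketch of that scan, assuming the usual "Field: value" layout and single-digit node numbers; the body below is a reconstruction for illustration, not a quote of setup/common.sh:

# Sketch: look up one field in a meminfo-style file (reconstruction).
# Example: get_meminfo_sketch HugePages_Surp 1 reads node1/meminfo and prints 0.
get_meminfo_sketch() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val rest
    while read -r line; do
        line=${line#Node [0-9] }             # per-node files prefix each line with "Node <n> "
        IFS=': ' read -r var val rest <<< "$line"
        [[ $var == "$get" ]] || continue     # these comparisons are the repeated trace entries
        echo "$val"                          # numeric part only, e.g. 512
        return 0
    done < "$mem_f"
    return 1
}

Each "echo <value>" / "return 0" pair in the trace (for example the "echo 0" that ends the HugePages_Surp scan below) is the successful exit of one such lookup.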
00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.655 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.656 09:14:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.656 09:14:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:02.656 node0=512 expecting 512 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:05:02.656 node1=512 expecting 512 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:02.656 00:05:02.656 real 0m1.449s 00:05:02.656 user 0m0.605s 00:05:02.656 sys 0m0.807s 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:02.656 09:14:46 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:02.656 ************************************ 00:05:02.656 END TEST per_node_1G_alloc 00:05:02.656 ************************************ 00:05:02.656 09:14:46 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:02.656 09:14:46 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:02.656 09:14:46 setup.sh.hugepages -- 
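The per_node_1G_alloc verification that just completed reduces to two checks read straight off the trace: the global count from /proc/meminfo must satisfy HugePages_Total == nr_hugepages + surplus + reserved (1024 == 1024 + 0 + 0), and each NUMA node's meminfo must report its expected share (512 pages on node0 and node1). A compact restatement with the values from the log; the variable names are ours, not hugepages.sh's:

# Restatement of the checks above, using the numbers reported in the trace.
nr_hugepages=1024 surp=0 resv=0
node_total=(512 512)                          # HugePages_Total from node0/ and node1/meminfo

(( 1024 == nr_hugepages + surp + resv )) || echo "global hugepage count mismatch"
for node in 0 1; do
    echo "node$node=${node_total[node]} expecting 512"
    [[ ${node_total[node]} == 512 ]] || echo "node$node mismatch"
done

With both nodes reporting 512, the test prints the "expecting 512" lines above and exits cleanly, giving the END TEST banner and the roughly 1.4 s wall time.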
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:02.656 09:14:46 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.656 09:14:46 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:02.656 ************************************ 00:05:02.656 START TEST even_2G_alloc 00:05:02.656 ************************************ 00:05:02.656 09:14:46 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:05:02.656 09:14:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:02.656 09:14:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:02.656 09:14:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:02.656 09:14:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:02.656 09:14:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:02.656 09:14:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:02.656 09:14:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:02.656 09:14:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:02.656 09:14:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:02.656 09:14:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:02.656 09:14:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:02.656 09:14:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:02.657 09:14:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:02.657 09:14:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:02.657 09:14:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:02.657 09:14:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:02.657 09:14:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:05:02.657 09:14:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:05:02.657 09:14:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:02.657 09:14:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:02.657 09:14:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:02.657 09:14:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:02.657 09:14:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:02.657 09:14:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:02.657 09:14:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:02.657 09:14:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:05:02.657 09:14:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:02.657 09:14:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:03.592 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:03.592 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 
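[editor's note] The even_2G_alloc trace above shows get_test_nr_hugepages turning a 2097152 kB request into 1024 hugepages and get_test_nr_hugepages_per_node splitting them evenly across the two NUMA nodes (512 each) before NRHUGE=1024 HUGE_EVEN_ALLOC=yes re-runs scripts/setup.sh. A minimal stand-alone sketch of that even split follows; it is reconstructed from the traced variable names and values, not the actual SPDK helper:

    # Sketch only: divide an even hugepage request across NUMA nodes,
    # matching the node0=512 / node1=512 split seen in this run.
    total_hugepages=1024          # 2097152 kB request / 2048 kB Hugepagesize
    node_count=2                  # _no_nodes in the trace
    per_node=$(( total_hugepages / node_count ))
    declare -a nodes_test
    for (( node = 0; node < node_count; node++ )); do
      nodes_test[node]=$per_node  # trace: nodes_test[_no_nodes - 1]=512
    done
    printf 'node%d=%d\n' 0 "${nodes_test[0]}" 1 "${nodes_test[1]}"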
00:05:03.593 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:03.593 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:03.593 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:03.593 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:03.593 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:03.593 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:03.593 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:03.593 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:03.593 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:03.593 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:03.593 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:03.593 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:03.593 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:03.593 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:03.593 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:03.857 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:03.857 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:03.857 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:03.857 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:03.857 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:03.857 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:03.857 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:03.857 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:03.857 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:03.857 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:03.857 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:03.857 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:03.857 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.857 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.857 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:03.857 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:03.857 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.857 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.857 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.857 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.857 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43771120 kB' 'MemAvailable: 47281100 kB' 'Buffers: 2704 kB' 'Cached: 12296008 kB' 'SwapCached: 0 kB' 'Active: 9327148 kB' 'Inactive: 3506552 kB' 'Active(anon): 8932796 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 
'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538116 kB' 'Mapped: 183940 kB' 'Shmem: 8397808 kB' 'KReclaimable: 205396 kB' 'Slab: 583192 kB' 'SReclaimable: 205396 kB' 'SUnreclaim: 377796 kB' 'KernelStack: 12832 kB' 'PageTables: 8136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10064988 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196676 kB' 'VmallocChunk: 0 kB' 'Percpu: 38784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1926748 kB' 'DirectMap2M: 15818752 kB' 'DirectMap1G: 51380224 kB' 00:05:03.857 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.857 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.857 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.857 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.857 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.857 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.857 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.857 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.857 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.857 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.857 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.857 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.857 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.857 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.857 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.857 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.857 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.857 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.857 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.857 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.857 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.857 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.857 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.857 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.858 09:14:48 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.858 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:03.859 09:14:48 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43771476 kB' 'MemAvailable: 47281456 kB' 'Buffers: 2704 kB' 'Cached: 12296008 kB' 'SwapCached: 0 kB' 'Active: 9327160 kB' 'Inactive: 3506552 kB' 'Active(anon): 8932808 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538112 kB' 'Mapped: 183916 kB' 'Shmem: 8397808 kB' 'KReclaimable: 205396 kB' 'Slab: 583192 kB' 'SReclaimable: 205396 kB' 'SUnreclaim: 377796 kB' 'KernelStack: 12880 kB' 'PageTables: 8252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10065004 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196644 kB' 'VmallocChunk: 0 kB' 'Percpu: 38784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1926748 kB' 'DirectMap2M: 15818752 kB' 'DirectMap1G: 51380224 kB' 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.859 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.860 09:14:48 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
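[editor's note] The long run of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... continue" entries is setup/common.sh's get_meminfo scanning the captured /proc/meminfo output one "key: value" pair at a time until it reaches the requested field, then echoing the value and returning; every rejected key produces one near-identical trace entry, which is why each lookup spans dozens of log lines. A minimal sketch of that scan pattern is below (hedged: the real helper also reads the per-node /sys/devices/system/node/nodeN/meminfo files, which this sketch omits):

    # Sketch of the traced field scan, not the full setup/common.sh helper.
    get_meminfo() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do     # trace: IFS=': ' / read -r var val _
        [[ $var == "$get" ]] || continue       # skip every other meminfo key
        echo "$val"                            # unit ("kB") lands in the discarded field
        return 0
      done < /proc/meminfo
      return 1
    }

    get_meminfo HugePages_Surp   # prints 0 on the system traced here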
00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.860 
09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.860 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.861 09:14:48 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43771640 kB' 'MemAvailable: 47281620 kB' 'Buffers: 2704 kB' 'Cached: 12296028 kB' 'SwapCached: 0 kB' 'Active: 9326960 kB' 'Inactive: 3506552 kB' 'Active(anon): 8932608 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537908 kB' 'Mapped: 183916 kB' 'Shmem: 8397828 kB' 'KReclaimable: 205396 kB' 'Slab: 583224 kB' 'SReclaimable: 205396 kB' 'SUnreclaim: 377828 kB' 'KernelStack: 12848 kB' 'PageTables: 8120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10065028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196660 kB' 'VmallocChunk: 0 kB' 'Percpu: 38784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1926748 kB' 'DirectMap2M: 15818752 kB' 'DirectMap1G: 51380224 kB' 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.861 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.862 09:14:48 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.862 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.863 
09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:03.863 nr_hugepages=1024 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:03.863 resv_hugepages=0 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:03.863 surplus_hugepages=0 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:03.863 anon_hugepages=0 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- 
# mem=("${mem[@]#Node +([0-9]) }") 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43772084 kB' 'MemAvailable: 47282064 kB' 'Buffers: 2704 kB' 'Cached: 12296048 kB' 'SwapCached: 0 kB' 'Active: 9327016 kB' 'Inactive: 3506552 kB' 'Active(anon): 8932664 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537980 kB' 'Mapped: 183916 kB' 'Shmem: 8397848 kB' 'KReclaimable: 205396 kB' 'Slab: 583224 kB' 'SReclaimable: 205396 kB' 'SUnreclaim: 377828 kB' 'KernelStack: 12880 kB' 'PageTables: 8232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10065048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196660 kB' 'VmallocChunk: 0 kB' 'Percpu: 38784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1926748 kB' 'DirectMap2M: 15818752 kB' 'DirectMap1G: 51380224 kB' 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.863 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.864 09:14:48 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.864 09:14:48 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.864 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.865 09:14:48 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27247176 kB' 'MemUsed: 5582708 kB' 'SwapCached: 0 kB' 'Active: 3268488 kB' 'Inactive: 108696 kB' 'Active(anon): 3157600 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 108696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3091268 kB' 'Mapped: 37752 kB' 
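
[editor's note, not part of the captured console output] For context on what the surrounding trace is doing: the even_2G_alloc case has requested 1024 hugepages of 2048 kB split evenly across the two NUMA nodes (nodes_sys[0]=512 and nodes_sys[1]=512 in the trace just above), and the per-node meminfo dumps that follow confirm each node reports HugePages_Total: 512. A minimal standalone sketch of that per-node spot-check, written independently of the SPDK test scripts (the script below and its expected count are illustrative assumptions, not the project's own code):

#!/usr/bin/env bash
# Hypothetical spot-check: does each NUMA node hold its even share of hugepages?
expected_per_node=512
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    # Per-node meminfo lines look like: "Node 0 HugePages_Total:   512"
    total=$(awk '/HugePages_Total:/ {print $4}' "$node_dir/meminfo")
    echo "node$node: HugePages_Total=$total (expected $expected_per_node)"
done

[end of editor's note; captured console output resumes]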
'AnonPages: 289112 kB' 'Shmem: 2871684 kB' 'KernelStack: 7784 kB' 'PageTables: 4948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 103476 kB' 'Slab: 331160 kB' 'SReclaimable: 103476 kB' 'SUnreclaim: 227684 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.865 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:03.866 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 16527964 kB' 'MemUsed: 11183860 kB' 'SwapCached: 0 kB' 'Active: 6055928 kB' 'Inactive: 3397856 kB' 'Active(anon): 5772464 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3397856 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9207508 kB' 'Mapped: 145396 kB' 'AnonPages: 246280 kB' 'Shmem: 5526188 kB' 
'KernelStack: 5064 kB' 'PageTables: 3192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 101920 kB' 'Slab: 252056 kB' 'SReclaimable: 101920 kB' 'SUnreclaim: 150136 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.867 09:14:48 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.867 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.868 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.868 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.868 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.868 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.868 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.868 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.868 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.868 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.868 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.868 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.868 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.868 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.868 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.868 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.868 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.868 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.868 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.868 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.868 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.868 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.868 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.868 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.868 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.868 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.868 09:14:48 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.868 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.868 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.868 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.868 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.868 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.868 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.868 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.868 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.868 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.868 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.868 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.868 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.868 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.868 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.868 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.868 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.868 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.868 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.868 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.868 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.868 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.868 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.868 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.868 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.868 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.868 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.127 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.127 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.127 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.127 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.127 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.127 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.127 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.127 09:14:48 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.127 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.127 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.127 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.127 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.127 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.127 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.127 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.127 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:04.127 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.127 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.127 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.127 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:04.127 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:04.127 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:04.127 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:04.127 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:04.127 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:04.127 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:04.127 node0=512 expecting 512 00:05:04.127 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:04.127 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:04.127 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:04.127 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:05:04.127 node1=512 expecting 512 00:05:04.127 09:14:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:04.127 00:05:04.127 real 0m1.373s 00:05:04.127 user 0m0.575s 00:05:04.127 sys 0m0.760s 00:05:04.127 09:14:48 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:04.128 09:14:48 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:04.128 ************************************ 00:05:04.128 END TEST even_2G_alloc 00:05:04.128 ************************************ 00:05:04.128 09:14:48 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:04.128 09:14:48 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:04.128 09:14:48 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:04.128 09:14:48 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.128 09:14:48 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:04.128 
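At this point the even_2G_alloc test has passed: both NUMA nodes report 512 free 2048 kB pages ("node0=512 expecting 512", "node1=512 expecting 512"), and the suite moves on to odd_alloc, which requests 1025 pages spread across the two nodes (513 on one, 512 on the other). The long runs of "[[ ... == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... continue" above are just the get_meminfo helper scanning every line of /sys/devices/system/node/nodeN/meminfo until it reaches the field it was asked for. Below is a minimal standalone sketch of that per-node check, assuming a two-node Linux host; the helper name get_node_hugepages and the hard-coded node list are illustrative assumptions for this example, not part of setup/common.sh.

  #!/usr/bin/env bash
  # Illustrative sketch only -- NOT setup/common.sh. It mimics the pattern the
  # trace shows: split each per-node meminfo line on ': ' and keep the value of
  # the single field we asked for.
  get_node_hugepages() {
      local node=$1 _n _id var val rest
      while IFS=': ' read -r _n _id var val rest; do
          # Per-node lines look like "Node 0 HugePages_Total:   512".
          [[ $var == HugePages_Total ]] && { echo "$val"; return 0; }
      done < "/sys/devices/system/node/node${node}/meminfo"
  }

  for node in 0 1; do
      echo "node${node}=$(get_node_hugepages "$node") expecting 512"
  done

On the system in this log both nodes return 512, so the even split verifies; the odd_alloc test that starts next reruns the same scan after setting HUGEMEM=2049, expecting an overall HugePages_Total of 1025.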
************************************ 00:05:04.128 START TEST odd_alloc 00:05:04.128 ************************************ 00:05:04.128 09:14:48 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:05:04.128 09:14:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:04.128 09:14:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:05:04.128 09:14:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:04.128 09:14:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:04.128 09:14:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:04.128 09:14:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:04.128 09:14:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:04.128 09:14:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:04.128 09:14:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:04.128 09:14:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:04.128 09:14:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:04.128 09:14:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:04.128 09:14:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:04.128 09:14:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:04.128 09:14:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:04.128 09:14:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:04.128 09:14:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:05:04.128 09:14:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:05:04.128 09:14:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:04.128 09:14:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:05:04.128 09:14:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:04.128 09:14:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:04.128 09:14:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:04.128 09:14:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:04.128 09:14:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:04.128 09:14:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:05:04.128 09:14:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:04.128 09:14:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:05.063 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:05.063 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:05.063 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:05.064 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:05.064 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:05.064 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:05.064 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 
00:05:05.064 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:05.064 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:05.064 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:05.064 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:05.064 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:05.064 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:05.064 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:05.064 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:05.064 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:05.064 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:05.328 09:14:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:05.328 09:14:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:05:05.328 09:14:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:05.328 09:14:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:05.328 09:14:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:05.328 09:14:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:05.328 09:14:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:05.328 09:14:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:05.328 09:14:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:05.328 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:05.328 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:05.328 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:05.328 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:05.328 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.328 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:05.328 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:05.328 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.328 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.328 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.328 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43764884 kB' 'MemAvailable: 47274864 kB' 'Buffers: 2704 kB' 'Cached: 12296136 kB' 'SwapCached: 0 kB' 'Active: 9323804 kB' 'Inactive: 3506552 kB' 'Active(anon): 8929452 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534644 kB' 'Mapped: 183104 kB' 'Shmem: 8397936 kB' 'KReclaimable: 205396 kB' 'Slab: 583028 kB' 'SReclaimable: 205396 kB' 'SUnreclaim: 377632 kB' 'KernelStack: 12800 kB' 'PageTables: 7828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 
'Committed_AS: 10051196 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196596 kB' 'VmallocChunk: 0 kB' 'Percpu: 38784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1926748 kB' 'DirectMap2M: 15818752 kB' 'DirectMap1G: 51380224 kB' 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.329 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.330 09:14:49 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.330 
09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43765004 kB' 'MemAvailable: 47274984 kB' 'Buffers: 2704 kB' 'Cached: 12296140 kB' 'SwapCached: 0 kB' 'Active: 9323532 kB' 'Inactive: 3506552 kB' 'Active(anon): 8929180 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534344 kB' 'Mapped: 183056 kB' 'Shmem: 8397940 kB' 'KReclaimable: 205396 kB' 'Slab: 583028 kB' 'SReclaimable: 205396 kB' 'SUnreclaim: 377632 kB' 'KernelStack: 12832 kB' 'PageTables: 7908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 10051212 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196564 kB' 'VmallocChunk: 0 kB' 'Percpu: 38784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1926748 kB' 'DirectMap2M: 15818752 kB' 'DirectMap1G: 51380224 kB' 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.330 09:14:49 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.330 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.331 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.332 09:14:49 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43764436 kB' 'MemAvailable: 47274416 kB' 'Buffers: 2704 kB' 'Cached: 12296156 kB' 'SwapCached: 0 kB' 'Active: 9323724 kB' 'Inactive: 3506552 kB' 'Active(anon): 8929372 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534568 kB' 'Mapped: 183056 kB' 'Shmem: 8397956 kB' 'KReclaimable: 205396 kB' 'Slab: 583036 kB' 'SReclaimable: 205396 kB' 'SUnreclaim: 377640 kB' 'KernelStack: 12832 kB' 'PageTables: 7928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 10051232 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196548 kB' 'VmallocChunk: 0 kB' 'Percpu: 38784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1926748 kB' 'DirectMap2M: 15818752 kB' 'DirectMap1G: 51380224 kB' 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.332 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.333 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.334 09:14:49 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:05.334 nr_hugepages=1025 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:05.334 resv_hugepages=0 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:05.334 surplus_hugepages=0 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:05.334 anon_hugepages=0 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 
-- # local var val 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43764436 kB' 'MemAvailable: 47274416 kB' 'Buffers: 2704 kB' 'Cached: 12296172 kB' 'SwapCached: 0 kB' 'Active: 9324048 kB' 'Inactive: 3506552 kB' 'Active(anon): 8929696 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534896 kB' 'Mapped: 183056 kB' 'Shmem: 8397972 kB' 'KReclaimable: 205396 kB' 'Slab: 583036 kB' 'SReclaimable: 205396 kB' 'SUnreclaim: 377640 kB' 'KernelStack: 12848 kB' 'PageTables: 7984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 10051256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196548 kB' 'VmallocChunk: 0 kB' 'Percpu: 38784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1926748 kB' 'DirectMap2M: 15818752 kB' 'DirectMap1G: 51380224 kB' 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.334 09:14:49 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.335 
09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
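The xtrace output running through this section comes from the get_meminfo helper in setup/common.sh scanning a meminfo dump key by key (local get/node, pick /proc/meminfo or a per-node file, mapfile, strip the "Node N " prefix, then an IFS=': ' read loop). The following is a minimal sketch reconstructed from the traced commands above; it is an illustration under those assumptions, not the verbatim SPDK helper, and it assumes bash with extglob available.

    get_meminfo() {
        local get=$1 node=$2
        local var val _
        local mem_f=/proc/meminfo
        local -a mem
        # Per-node statistics live under /sys/devices/system/node/nodeN/meminfo;
        # with no node argument the system-wide /proc/meminfo is used instead.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node lines carry a "Node N " prefix; strip it so keys match /proc/meminfo.
        shopt -s extglob
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            # Entries look like "HugePages_Total:    1025" or "MemTotal: 60541708 kB"
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done
        return 1
    }
    # e.g. get_meminfo HugePages_Total    -> 1025 (system-wide, per this run)
    #      get_meminfo HugePages_Surp 0   -> 0    (NUMA node 0, per this run)

In the run traced here it yields 0 for HugePages_Surp and HugePages_Rsvd and 1025 for HugePages_Total, which is what the echo statements below report.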
00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.335 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27229960 kB' 'MemUsed: 5599924 kB' 'SwapCached: 0 kB' 'Active: 3268176 kB' 'Inactive: 108696 kB' 'Active(anon): 3157288 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 108696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3091360 kB' 'Mapped: 37660 kB' 'AnonPages: 288680 kB' 'Shmem: 2871776 kB' 'KernelStack: 7784 kB' 'PageTables: 4784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 103476 kB' 'Slab: 331164 kB' 'SReclaimable: 103476 kB' 'SUnreclaim: 227688 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 
0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.336 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.337 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 16534852 kB' 'MemUsed: 11176972 kB' 'SwapCached: 0 kB' 'Active: 6055680 kB' 'Inactive: 3397856 kB' 'Active(anon): 5772216 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3397856 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9207520 kB' 'Mapped: 145396 kB' 'AnonPages: 246080 kB' 'Shmem: 5526200 kB' 'KernelStack: 5032 kB' 'PageTables: 3104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 101920 kB' 'Slab: 251872 kB' 'SReclaimable: 101920 kB' 'SUnreclaim: 149952 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
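The xtrace above is setup/common.sh scanning a per-node meminfo file field by field, emitting "continue" for every key that is not HugePages_Surp and finally echoing the matched value (0 here). A minimal standalone sketch of that lookup — the helper name get_meminfo_sketch is illustrative, not the script's own function — under the same assumptions the trace shows (per-node stats live in /sys and carry a "Node N " prefix on every line):

    get_meminfo_sketch() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo
        # per-node statistics live under /sys and prefix every line with "Node N "
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local line var val
        while IFS= read -r line; do
            line=${line#Node "$node" }            # drop the "Node N " prefix, if present
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then         # skip ("continue") until the requested field
                echo "$val"
                return 0
            fi
        done < "$mem_f"
        return 1
    }

Called as get_meminfo_sketch HugePages_Surp 1 it reads /sys/devices/system/node/node1/meminfo and prints the bare value, which is what the surrounding loop adds into nodes_test[node].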
00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.338 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.339 09:14:49 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node 
in "${!nodes_test[@]}" 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:05:05.339 node0=512 expecting 513 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:05:05.339 node1=513 expecting 512 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:05:05.339 00:05:05.339 real 0m1.337s 00:05:05.339 user 0m0.560s 00:05:05.339 sys 0m0.738s 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.339 09:14:49 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:05.339 ************************************ 00:05:05.339 END TEST odd_alloc 00:05:05.339 ************************************ 00:05:05.339 09:14:49 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:05.339 09:14:49 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:05.339 09:14:49 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:05.339 09:14:49 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.339 09:14:49 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:05.339 ************************************ 00:05:05.339 START TEST custom_alloc 00:05:05.339 ************************************ 00:05:05.339 09:14:49 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:05:05.339 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:05:05.339 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:05:05.339 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:05.339 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:05.339 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:05.339 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:05.339 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:05.339 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:05.339 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:05.339 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:05.339 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:05.339 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:05.339 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:05.339 09:14:49 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:05.339 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:05.339 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:05.339 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:05.339 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:05.340 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:05.340 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:05.340 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:05:05.340 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:05:05.340 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:05:05.340 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:05.340 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:05:05.340 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:05.340 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:05.340 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:05.340 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:05.340 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:05:05.340 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:05:05.340 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:05.340 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:05.340 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:05.340 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:05.340 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:05.340 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:05.340 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:05.340 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:05.340 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:05.340 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:05.340 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:05.340 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:05.340 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:05.340 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:05.340 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:05.340 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:05.340 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:05:05.340 09:14:49 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:05.340 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:05.340 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:05.340 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:05.340 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:05.340 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:05.340 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:05.340 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:05.340 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:05.340 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:05.340 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:05.340 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:05.340 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:05.340 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:05.340 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:05:05.340 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:05.340 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:05.340 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:05.340 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:05:05.340 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:05.340 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:05:05.340 09:14:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:05:05.340 09:14:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:05.340 09:14:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:06.717 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:06.717 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:06.717 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:06.717 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:06.717 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:06.717 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:06.717 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:06.717 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:06.717 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:06.717 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:06.717 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:06.718 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 
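Just above, the custom_alloc test splits its request unevenly across the two NUMA nodes (512 pages on node 0, 1024 on node 1) and folds the result into the HUGENODE string handed to setup.sh. A condensed sketch of that assembly — variable names mirror the trace, but this is an illustration rather than the script itself:

    declare -a nodes_hp hugenode_parts
    nodes_hp[0]=512                                   # pages requested on node 0
    nodes_hp[1]=1024                                  # pages requested on node 1
    nr_hugepages=0
    for node in "${!nodes_hp[@]}"; do
        hugenode_parts+=("nodes_hp[$node]=${nodes_hp[node]}")
        (( nr_hugepages += nodes_hp[node] ))
    done
    HUGENODE=$(IFS=,; echo "${hugenode_parts[*]}")    # join per-node entries with commas
    echo "$HUGENODE"        # nodes_hp[0]=512,nodes_hp[1]=1024
    echo "$nr_hugepages"    # 1536

The 1536-page total and the per-node split are what the verify_nr_hugepages pass that follows checks against after setup.sh has rebound the devices.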
00:05:06.718 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:06.718 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:06.718 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:06.718 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:06.718 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 42727508 kB' 'MemAvailable: 46237488 kB' 'Buffers: 2704 kB' 'Cached: 12296272 kB' 'SwapCached: 0 kB' 'Active: 9324124 kB' 'Inactive: 3506552 kB' 'Active(anon): 8929772 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534904 kB' 'Mapped: 183120 kB' 'Shmem: 8398072 kB' 'KReclaimable: 205396 kB' 'Slab: 582688 kB' 'SReclaimable: 205396 kB' 'SUnreclaim: 377292 kB' 'KernelStack: 12816 kB' 'PageTables: 7876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 10051588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196628 kB' 'VmallocChunk: 0 kB' 'Percpu: 38784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 
0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1926748 kB' 'DirectMap2M: 15818752 kB' 'DirectMap1G: 51380224 kB' 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
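The /proc/meminfo dump above reports HugePages_Total: 1536, Hugepagesize: 2048 kB and Hugetlb: 3145728 kB, which is exactly the node 0 + node 1 request accounted at the 2 MB page size. A two-line check of that arithmetic (values copied from the dump; a sanity-check sketch only, not part of the test scripts):

    total_pages=$(( 512 + 1024 ))     # node 0 + node 1 -> 1536, matches HugePages_Total
    echo $(( total_pages * 2048 ))    # 1536 * 2048 kB -> 3145728 kB, matches Hugetlb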
00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.718 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@25 -- # [[ -n '' ]] 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 42729632 kB' 'MemAvailable: 46239612 kB' 'Buffers: 2704 kB' 'Cached: 12296272 kB' 'SwapCached: 0 kB' 'Active: 9324260 kB' 'Inactive: 3506552 kB' 'Active(anon): 8929908 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535068 kB' 'Mapped: 183152 kB' 'Shmem: 8398072 kB' 'KReclaimable: 205396 kB' 'Slab: 582736 kB' 'SReclaimable: 205396 kB' 'SUnreclaim: 377340 kB' 'KernelStack: 12816 kB' 'PageTables: 7872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 10051604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196580 kB' 'VmallocChunk: 0 kB' 'Percpu: 38784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1926748 kB' 'DirectMap2M: 15818752 kB' 'DirectMap1G: 51380224 kB' 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.719 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.720 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.721 09:14:51 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.721 09:14:51 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.721 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 42729472 kB' 'MemAvailable: 46239452 kB' 'Buffers: 2704 kB' 'Cached: 12296288 kB' 'SwapCached: 0 kB' 'Active: 9324468 kB' 'Inactive: 3506552 kB' 'Active(anon): 8930116 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535224 kB' 
'Mapped: 183076 kB' 'Shmem: 8398088 kB' 'KReclaimable: 205396 kB' 'Slab: 582716 kB' 'SReclaimable: 205396 kB' 'SUnreclaim: 377320 kB' 'KernelStack: 12832 kB' 'PageTables: 7924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 10051628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196580 kB' 'VmallocChunk: 0 kB' 'Percpu: 38784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1926748 kB' 'DirectMap2M: 15818752 kB' 'DirectMap1G: 51380224 kB' 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.722 09:14:51 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.722 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.723 09:14:51 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.723 09:14:51 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
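The long run of '-- # continue' entries above is the xtrace of setup/common.sh's get_meminfo helper walking /proc/meminfo one field at a time until it reaches the requested key (HugePages_Surp earlier, HugePages_Rsvd here) and echoing its value. A minimal stand-alone sketch of that lookup follows; the function name get_meminfo_value and its exact structure are illustrative assumptions, not the SPDK helper itself.

#!/usr/bin/env bash
# Illustrative re-creation of the lookup traced above; get_meminfo_value is a
# hypothetical name, not the SPDK setup/common.sh function.
shopt -s extglob

get_meminfo_value() {
    local get=$1 node=${2:-}          # field name, optional NUMA node
    local mem_f=/proc/meminfo
    # Prefer the per-node counters when a node is given and the file exists.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local line var val _
    while read -r line; do
        line=${line#Node +([0-9]) }   # per-node files prefix each line with "Node N "
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$mem_f"
    return 1
}

get_meminfo_value HugePages_Rsvd     # prints 0 on the host traced above
get_meminfo_value HugePages_Free 0   # per-node variant used later in this log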
00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:05:06.723 nr_hugepages=1536 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:06.723 resv_hugepages=0 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:06.723 surplus_hugepages=0 00:05:06.723 09:14:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:06.723 anon_hugepages=0 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 42729472 kB' 'MemAvailable: 46239452 kB' 'Buffers: 2704 kB' 'Cached: 12296308 kB' 'SwapCached: 0 kB' 'Active: 9324144 kB' 'Inactive: 3506552 kB' 'Active(anon): 8929792 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534880 kB' 'Mapped: 183076 kB' 'Shmem: 8398108 kB' 'KReclaimable: 205396 kB' 'Slab: 582712 kB' 'SReclaimable: 205396 kB' 'SUnreclaim: 377316 kB' 'KernelStack: 12800 kB' 'PageTables: 7812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 10051648 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196580 kB' 'VmallocChunk: 0 kB' 'Percpu: 38784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 
kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1926748 kB' 'DirectMap2M: 15818752 kB' 'DirectMap1G: 51380224 kB' 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.724 09:14:51 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.724 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.725 09:14:51 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
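As a quick cross-check of the snapshot printed above: HugePages_Total is 1536 and Hugepagesize is 2048 kB, which multiplies out to the Hugetlb figure of 3145728 kB (3 GiB) the kernel reports. A tiny shell check of that arithmetic, with the values copied from this log rather than queried live:

total_pages=1536
page_kb=2048
echo $(( total_pages * page_kb ))    # 3145728, matching the Hugetlb line above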
00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.725 09:14:51 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:06.725 09:14:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 
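The trace above is the tail of a get_meminfo call: common.sh walks every "Field: value" pair of the meminfo file, skipping fields until it reaches HugePages_Total (1536 here), and get_nodes then records the per-node split (512 hugepages on node0, 1024 on node1). A condensed sketch of that scan, under an assumed helper name (the real logic lives in test/setup/common.sh), looks like this:

```bash
#!/usr/bin/env bash
# Minimal sketch of the meminfo scan seen in the trace above: pick the
# relevant meminfo file, drop the "Node N " prefix that per-node files
# carry, then walk the "Field: value" pairs until the requested field is
# found and echo its value. Helper name is illustrative, not the real one.
shopt -s extglob

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")        # strip "Node 0 ", "Node 1 ", ...
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

# Example: total 2 MiB hugepages currently exposed by NUMA node 0.
get_meminfo_sketch HugePages_Total 0
```

Reading the per-node file under /sys/devices/system/node is what keeps the count NUMA-aware, and that is what lets the test assert the 512/1024 split per node a few lines further down.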
00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27235548 kB' 'MemUsed: 5594336 kB' 'SwapCached: 0 kB' 'Active: 3268320 kB' 'Inactive: 108696 kB' 'Active(anon): 3157432 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 108696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3091500 kB' 'Mapped: 37680 kB' 'AnonPages: 288620 kB' 'Shmem: 2871916 kB' 'KernelStack: 7768 kB' 'PageTables: 4692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 103476 kB' 'Slab: 330900 kB' 'SReclaimable: 103476 kB' 'SUnreclaim: 227424 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.726 09:14:51 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.726 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.019 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.019 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.019 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.019 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.019 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.019 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.019 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.019 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.019 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.020 09:14:51 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:07.020 09:14:51 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 15497728 kB' 'MemUsed: 12214096 kB' 'SwapCached: 0 kB' 'Active: 6055892 kB' 'Inactive: 3397856 kB' 'Active(anon): 5772428 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3397856 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9207540 kB' 'Mapped: 145396 kB' 'AnonPages: 246300 kB' 'Shmem: 5526220 kB' 'KernelStack: 5064 kB' 'PageTables: 3176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 101920 kB' 'Slab: 251812 kB' 'SReclaimable: 101920 kB' 'SUnreclaim: 149892 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.020 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.021 09:14:51 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.021 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.022 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.022 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.022 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.022 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.022 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.022 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.022 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.022 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.022 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.022 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.022 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.022 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.022 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.022 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.022 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:07.022 09:14:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:07.022 09:14:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:07.022 09:14:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:07.022 09:14:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:07.022 09:14:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # 
sorted_s[nodes_sys[node]]=1 00:05:07.022 09:14:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:07.022 node0=512 expecting 512 00:05:07.022 09:14:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:07.022 09:14:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:07.022 09:14:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:07.022 09:14:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:05:07.022 node1=1024 expecting 1024 00:05:07.022 09:14:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:05:07.022 00:05:07.022 real 0m1.464s 00:05:07.022 user 0m0.613s 00:05:07.022 sys 0m0.812s 00:05:07.022 09:14:51 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:07.022 09:14:51 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:07.022 ************************************ 00:05:07.022 END TEST custom_alloc 00:05:07.022 ************************************ 00:05:07.022 09:14:51 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:07.022 09:14:51 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:07.022 09:14:51 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:07.022 09:14:51 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.022 09:14:51 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:07.022 ************************************ 00:05:07.022 START TEST no_shrink_alloc 00:05:07.022 ************************************ 00:05:07.022 09:14:51 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:05:07.022 09:14:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:07.022 09:14:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:07.022 09:14:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:07.022 09:14:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:05:07.022 09:14:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:07.022 09:14:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:07.022 09:14:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:07.022 09:14:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:07.022 09:14:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:07.022 09:14:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:07.022 09:14:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:07.022 09:14:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:07.022 09:14:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:07.022 09:14:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:07.022 09:14:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g 
nodes_test 00:05:07.022 09:14:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:07.022 09:14:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:07.022 09:14:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:07.022 09:14:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:07.022 09:14:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:05:07.022 09:14:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:07.022 09:14:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:07.956 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:07.956 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:07.956 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:07.956 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:07.956 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:07.956 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:07.956 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:07.956 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:07.956 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:07.956 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:07.956 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:07.956 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:07.956 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:07.956 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:07.956 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:07.956 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:07.956 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:08.220 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:08.220 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:08.220 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:08.220 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:08.220 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:08.220 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:08.220 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:08.220 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:08.220 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:08.220 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:08.220 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:08.220 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:08.220 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:08.220 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.220 09:14:52 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:08.220 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:08.220 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.220 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.220 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.220 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.220 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43737836 kB' 'MemAvailable: 47247816 kB' 'Buffers: 2704 kB' 'Cached: 12296396 kB' 'SwapCached: 0 kB' 'Active: 9330412 kB' 'Inactive: 3506552 kB' 'Active(anon): 8936060 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541084 kB' 'Mapped: 183944 kB' 'Shmem: 8398196 kB' 'KReclaimable: 205396 kB' 'Slab: 582772 kB' 'SReclaimable: 205396 kB' 'SUnreclaim: 377376 kB' 'KernelStack: 12832 kB' 'PageTables: 7904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10058044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196616 kB' 'VmallocChunk: 0 kB' 'Percpu: 38784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1926748 kB' 'DirectMap2M: 15818752 kB' 'DirectMap1G: 51380224 kB' 00:05:08.220 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.220 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.220 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.220 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.220 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.220 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.220 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.220 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.220 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.220 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.220 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.220 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.221 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.221 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.221 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
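The single printf above is the full /proc/meminfo snapshot that verify_nr_hugepages scans: with no node argument, get_meminfo falls back to /proc/meminfo, and the preceding `[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]` check confirms transparent hugepages are not forced to "never" before AnonHugePages is read. A minimal sketch of those two checks follows; the sysfs path for the THP state string is the standard kernel interface and is assumed here, since the excerpt only shows the already-expanded value.

```bash
#!/usr/bin/env bash
# Sketch of the checks traced above: THP state first, then the
# AnonHugePages figure from /proc/meminfo. The sysfs path is the usual
# kernel location for the "always [madvise] never" string; it is not
# printed verbatim in this excerpt, so treat it as an assumption.
thp_enabled=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
if [[ $thp_enabled != *"[never]"* ]]; then
    anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    echo "anon=${anon_kb} kB"
else
    echo "anon=0 (transparent hugepages disabled)"
fi
```

In this run the check passes ("always [madvise] never" does not contain "[never]"), and AnonHugePages reads back as 0 kB, so anon=0 and the verification moves on to the per-node HugePages_Surp reads.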
00:05:08.221 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.221 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.221 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.221 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.221 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.221 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.221 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.221 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.221 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.221 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.221 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.221 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.221 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.221 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.221 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.221 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.221 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.221 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.221 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.221 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.221 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.221 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.221 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.221 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.221 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.221 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.221 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.221 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.221 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.221 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.221 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.221 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.221 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.221 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:08.221 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[the get_meminfo AnonHugePages lookup steps through every remaining /proc/meminfo field (Mlocked through HardwareCorrupted), taking the "continue" branch at setup/common.sh@32 for each key that is not AnonHugePages]
00:05:08.222 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:08.222 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:08.222 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:08.222 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:08.222 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
[setup/common.sh@17-29: get=HugePages_Surp, node is empty, mem_f=/proc/meminfo; the file is read with mapfile and any "Node <n> " prefixes are stripped]
00:05:08.222 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43738440 kB' 'MemAvailable: 47248420 kB' 'Buffers: 2704 kB' 'Cached: 12296396 kB' 'SwapCached: 0 kB' 'Active: 9324644 kB' 'Inactive: 3506552 kB' 'Active(anon): 8930292 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535284 kB' 'Mapped: 183500 kB' 'Shmem: 8398196 kB' 'KReclaimable: 205396 kB' 'Slab: 582756 kB' 'SReclaimable: 205396 kB' 'SUnreclaim: 377360 kB' 'KernelStack: 12832 kB' 'PageTables: 7876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10051940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196596 kB' 'VmallocChunk: 0 kB' 'Percpu: 38784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1926748 kB' 'DirectMap2M: 15818752 kB' 'DirectMap1G: 51380224 kB'
[the per-key scan for HugePages_Surp then walks this snapshot field by field, taking the "continue" branch on every entry until the HugePages_Surp line is reached]
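The loop traced above is the setup/common.sh get_meminfo helper: it reads /proc/meminfo (or, when a node argument is given and a per-node file exists, /sys/devices/system/node/node<N>/meminfo), strips the "Node <n> " prefix that per-node files carry, and splits each line on ': ' until it finds the requested field, whose value it echoes. A minimal bash sketch of that behaviour, reconstructed from the xtrace; the real SPDK helper may differ in detail:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below

    # get_meminfo <field> [node] - print the value of one meminfo field.
    # Reconstructed from the xtrace in this log; not the verbatim SPDK code.
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val
        local mem_f=/proc/meminfo mem

        # Prefer the per-NUMA-node file when a node is given and it exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node meminfo lines start with "Node <n> "; drop that prefix.
        mem=("${mem[@]#Node +([0-9]) }")

        # Split "Key: value unit" on ': ' and stop at the requested key.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    # Against the snapshot printed above:
    #   get_meminfo HugePages_Surp   -> 0
    #   get_meminfo AnonHugePages    -> 0      (kB)
    #   get_meminfo Hugepagesize     -> 2048   (kB)

Fed the snapshot shown above, the scan resolves HugePages_Surp to 0, which is exactly the surp=0 assignment recorded next in the trace.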
00:05:08.224 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:08.224 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:08.224 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:08.224 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:05:08.224 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[setup/common.sh@17-29: get=HugePages_Rsvd, node is empty, mem_f=/proc/meminfo; the file is read with mapfile and any "Node <n> " prefixes are stripped]
00:05:08.225 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43737684 kB' 'MemAvailable: 47247664 kB' 'Buffers: 2704 kB' 'Cached: 12296420 kB' 'SwapCached: 0 kB' 'Active: 9324272 kB' 'Inactive: 3506552 kB' 'Active(anon): 8929920 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534924 kB' 'Mapped: 183084 kB' 'Shmem: 8398220 kB' 'KReclaimable: 205396 kB' 'Slab: 582828 kB' 'SReclaimable: 205396 kB' 'SUnreclaim: 377432 kB' 'KernelStack: 12848 kB' 'PageTables: 7928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10051964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196580 kB' 'VmallocChunk: 0 kB' 'Percpu: 38784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1926748 kB' 'DirectMap2M: 15818752 kB' 'DirectMap1G: 51380224 kB'
[the per-key scan for HugePages_Rsvd then walks this snapshot field by field, taking the "continue" branch on every entry until the HugePages_Rsvd line is reached]
00:05:08.227 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:08.227 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:08.227 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:08.227 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:08.227 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:08.227 nr_hugepages=1024
00:05:08.227 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:08.227 resv_hugepages=0
00:05:08.227 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:08.227 surplus_hugepages=0
00:05:08.227 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:08.227 anon_hugepages=0
00:05:08.227 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:08.227 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:05:08.227 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[setup/common.sh@17-29: get=HugePages_Total, node is empty, mem_f=/proc/meminfo; the file is read with mapfile and any "Node <n> " prefixes are stripped]
00:05:08.227 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43737684 kB' 'MemAvailable: 47247664 kB' 'Buffers: 2704 kB' 'Cached: 12296440 kB' 'SwapCached: 0 kB' 'Active: 9324560 kB' 'Inactive: 3506552 kB' 'Active(anon): 8930208 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535212 kB' 'Mapped: 183084 kB' 'Shmem: 8398240 kB' 'KReclaimable: 205396 kB' 'Slab: 582828 kB' 'SReclaimable: 205396 kB' 'SUnreclaim: 377432 kB' 'KernelStack: 12832 kB' 'PageTables: 7876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10051984 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196564 kB' 'VmallocChunk: 0 kB' 'Percpu: 38784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1926748 kB' 'DirectMap2M: 15818752 kB' 'DirectMap1G: 51380224 kB'
[the per-key scan for HugePages_Total then walks this snapshot field by field; the raw trace of that scan resumes after the sketch below]
[[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.228 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.228 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.228 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.228 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.228 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.228 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.228 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.228 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.228 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.228 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.228 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.228 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.228 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.228 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.228 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.228 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.228 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.228 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.228 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.228 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.228 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.228 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.228 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.228 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.228 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.228 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.228 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.228 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.228 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.228 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.228 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.228 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.228 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.228 09:14:52 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.228 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.228 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.228 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.228 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.228 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.228 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.228 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.228 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.228 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- 
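The xtrace above is setup/common.sh's get_meminfo helper walking /proc/meminfo field by field until it matches HugePages_Total and echoes 1024. A minimal stand-alone sketch of the same lookup (hypothetical function name, awk instead of the traced read loop; not the repo's helper):

  # Pull one counter out of /proc/meminfo by key, numeric value only.
  get_hugepage_counter() {
      local key=$1
      awk -v k="$key" -F': *' '$1 == k {print $2+0}' /proc/meminfo
  }
  total=$(get_hugepage_counter HugePages_Total)   # 1024 in this run
  rsvd=$(get_hugepage_counter HugePages_Rsvd)     # 0
  surp=$(get_hugepage_counter HugePages_Surp)     # 0
  echo "total=$total rsvd=$rsvd surp=$surp"
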
setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:08.229 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26171628 kB' 'MemUsed: 6658256 kB' 'SwapCached: 0 kB' 'Active: 3269028 kB' 'Inactive: 108696 kB' 'Active(anon): 3158140 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 108696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3091612 kB' 'Mapped: 37688 kB' 'AnonPages: 289296 kB' 'Shmem: 2872028 kB' 'KernelStack: 7800 kB' 'PageTables: 4804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 103476 kB' 'Slab: 331060 kB' 'SReclaimable: 103476 kB' 'SUnreclaim: 227584 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.230 09:14:52 
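When get_meminfo is called with a node index, the trace above shows mem_f switching from /proc/meminfo to /sys/devices/system/node/node0/meminfo, with the leading "Node 0 " prefix stripped from each line before parsing. A hedged equivalent using sed/awk rather than the traced mapfile loop:

  node=0
  mem_f=/sys/devices/system/node/node${node}/meminfo
  # Per-node meminfo lines look like "Node 0 HugePages_Surp: 0";
  # drop the node prefix, then read the counter the same way as the global case.
  sed "s/^Node ${node} //" "$mem_f" \
      | awk -F': *' '$1 == "HugePages_Surp" {print $2+0}'
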
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.230 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.231 09:14:52 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:08.231 node0=1024 expecting 1024 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:08.231 09:14:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:09.612 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:09.612 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:09.612 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:09.612 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:09.612 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:09.612 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:09.612 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:09.612 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:09.613 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:09.613 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:09.613 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:09.613 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:09.613 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:09.613 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:09.613 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:09.613 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:09.613 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:09.613 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # 
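The setup.sh run above leaves the existing 1024 hugepages in place (NRHUGE=512 was requested, hence the INFO line) and the per-node check prints "node0=1024 expecting 1024". Roughly the same comparison can be made against the kernel's per-node 2 MiB hugepage counter in sysfs; a sketch, assuming that sysfs layout is present on the node:

  expected=1024
  actual=$(cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages)
  if [[ "$actual" == "$expected" ]]; then
      echo "node0=$actual expecting $expected"
  else
      echo "hugepage mismatch on node0: got $actual, wanted $expected" >&2
  fi
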
verify_nr_hugepages 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43750960 kB' 'MemAvailable: 47260940 kB' 'Buffers: 2704 kB' 'Cached: 12296508 kB' 'SwapCached: 0 kB' 'Active: 9324724 kB' 'Inactive: 3506552 kB' 'Active(anon): 8930372 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535368 kB' 'Mapped: 183228 kB' 'Shmem: 8398308 kB' 'KReclaimable: 205396 kB' 'Slab: 583104 kB' 'SReclaimable: 205396 kB' 'SUnreclaim: 377708 kB' 'KernelStack: 12880 kB' 'PageTables: 7988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10052160 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196644 kB' 'VmallocChunk: 0 kB' 'Percpu: 38784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1926748 kB' 'DirectMap2M: 15818752 kB' 'DirectMap1G: 51380224 kB' 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.613 09:14:53 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.613 09:14:53 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.613 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # 
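The AnonHugePages probe that finishes above returns 0, so anon=0: the test only counts anonymous hugepages when transparent hugepages are not globally disabled (the earlier [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] guard). A hedged sketch of that guard plus the read:

  anon=0
  thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
  # Skip the AnonHugePages read when "[never]" is the selected THP policy.
  if [[ $thp != *"[never]"* ]]; then
      anon=$(awk -F': *' '$1 == "AnonHugePages" {print $2+0}' /proc/meminfo)
  fi
  echo "anon_hugepages=$anon"
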
[[ -e /sys/devices/system/node/node/meminfo ]] 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.614 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43751768 kB' 'MemAvailable: 47261748 kB' 'Buffers: 2704 kB' 'Cached: 12296508 kB' 'SwapCached: 0 kB' 'Active: 9325156 kB' 'Inactive: 3506552 kB' 'Active(anon): 8930804 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535824 kB' 'Mapped: 183200 kB' 'Shmem: 8398308 kB' 'KReclaimable: 205396 kB' 'Slab: 583092 kB' 'SReclaimable: 205396 kB' 'SUnreclaim: 377696 kB' 'KernelStack: 12896 kB' 'PageTables: 7976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10052180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196644 kB' 'VmallocChunk: 0 kB' 'Percpu: 38784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1926748 kB' 'DirectMap2M: 15818752 kB' 'DirectMap1G: 51380224 kB' 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
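The long runs of IFS=': ' / read -r var val _ / continue in this xtrace come from the get_meminfo helper in setup/common.sh: it walks the meminfo key/value pairs one at a time, skips every key that is not the one requested (HugePages_Surp in this call), and echoes the matching value. A minimal sketch of that loop, simplified from what the trace shows (the helper name matches the trace; the exact body is an assumption, not the verbatim script):
# Hedged, simplified reconstruction of the scan the xtrace above performs.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # non-matching keys produce the repeated "continue" lines
        echo "$val"                        # e.g. 0 for HugePages_Surp, 1024 for HugePages_Total
        return 0
    done < /proc/meminfo
}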
00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.615 
09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.615 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.616 09:14:53 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:09.616 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43752536 kB' 'MemAvailable: 47262516 kB' 'Buffers: 2704 kB' 'Cached: 12296524 kB' 'SwapCached: 0 kB' 'Active: 9324832 kB' 'Inactive: 3506552 kB' 'Active(anon): 8930480 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535404 kB' 'Mapped: 183124 kB' 'Shmem: 8398324 kB' 'KReclaimable: 205396 kB' 'Slab: 583108 kB' 'SReclaimable: 205396 kB' 'SUnreclaim: 377712 kB' 'KernelStack: 12880 kB' 'PageTables: 7920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10052200 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196628 kB' 'VmallocChunk: 0 kB' 'Percpu: 38784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1926748 kB' 'DirectMap2M: 15818752 kB' 'DirectMap1G: 51380224 kB' 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.617 09:14:53 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
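The [[ -e /sys/devices/system/node/node/meminfo ]] test that precedes each of these scans reflects get_meminfo's optional NUMA-node argument: when a node number is supplied the helper presumably reads that node's meminfo instead of the global /proc/meminfo, and because node= is empty in these calls the constructed path collapses to .../node/node/meminfo, the test fails, and the global file is used. A hedged sketch of that file selection (the path and variable names appear in the trace; the guard logic around them is an assumption):
# Assumed file-selection logic behind the "[[ -e ... ]]" lines in the trace.
node=${node:-}                 # empty for the system-wide calls shown here
mem_f=/proc/meminfo
if [[ -n $node && -e /sys/devices/system/node/node${node}/meminfo ]]; then
    mem_f=/sys/devices/system/node/node${node}/meminfo   # per-node view when a node is given
fi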
00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.617 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
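A few lines below, the HugePages_Rsvd scan ends with resv=0, the script prints nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, and then asserts (( 1024 == nr_hugepages + surp + resv )) and (( 1024 == nr_hugepages )): the no_shrink_alloc test is checking that the pool of 1024 hugepages (Hugepagesize: 2048 kB) has not shrunk and carries no surplus or reserved pages. The same check, spelled out with the values from this run:
# Values taken from the trace around this point.
nr_hugepages=1024
anon=0; surp=0; resv=0
(( 1024 == nr_hugepages + surp + resv ))   # 1024 == 1024 + 0 + 0 -> holds
(( 1024 == nr_hugepages ))                 # pool size unchanged, so no shrink occurred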
00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:09.618 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:09.619 nr_hugepages=1024 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:09.619 resv_hugepages=0 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:09.619 surplus_hugepages=0 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:09.619 anon_hugepages=0 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43753008 kB' 'MemAvailable: 47262988 kB' 'Buffers: 2704 kB' 'Cached: 12296552 kB' 'SwapCached: 0 kB' 'Active: 9324780 kB' 'Inactive: 3506552 kB' 'Active(anon): 8930428 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535332 kB' 'Mapped: 183124 kB' 'Shmem: 8398352 kB' 'KReclaimable: 205396 kB' 'Slab: 583100 kB' 'SReclaimable: 205396 kB' 'SUnreclaim: 377704 kB' 'KernelStack: 12880 kB' 'PageTables: 7924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10052224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196628 kB' 'VmallocChunk: 0 kB' 'Percpu: 38784 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1926748 kB' 'DirectMap2M: 15818752 kB' 'DirectMap1G: 51380224 kB' 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.619 
09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.619 09:14:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.619 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.619 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.620 09:14:54 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.620 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.621 09:14:54 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26168680 kB' 'MemUsed: 6661204 kB' 'SwapCached: 0 kB' 'Active: 3269096 kB' 'Inactive: 108696 kB' 'Active(anon): 3158208 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 108696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3091656 kB' 'Mapped: 37728 kB' 'AnonPages: 289296 kB' 'Shmem: 2872072 kB' 'KernelStack: 7800 kB' 'PageTables: 4780 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 103476 kB' 'Slab: 331120 kB' 'SReclaimable: 103476 kB' 'SUnreclaim: 227644 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.621 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.622 
09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.622 
09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.622 09:14:54 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:09.622 node0=1024 expecting 1024 00:05:09.622 09:14:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:09.623 00:05:09.623 real 0m2.794s 00:05:09.623 user 0m1.196s 00:05:09.623 sys 0m1.521s 00:05:09.623 09:14:54 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:09.623 09:14:54 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:09.623 ************************************ 00:05:09.623 END TEST no_shrink_alloc 00:05:09.623 ************************************ 00:05:09.623 09:14:54 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 
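The no_shrink_alloc trace above walks /proc/meminfo (and the per-node copy under /sys/devices/system/node/node0/meminfo) with IFS=': ' and read -r var val _, skipping every field until the requested one (HugePages_Total, then HugePages_Surp) matches, and then echoes its value. A condensed bash sketch of that lookup follows; the function name get_meminfo_sketch is illustrative, the traced helper is get_meminfo in test/setup/common.sh, and error handling is omitted.

  shopt -s extglob
  # Sketch only (not part of the log): field lookup as exercised by the trace above.
  get_meminfo_sketch() {
      local get=$1 node=$2
      local mem_f=/proc/meminfo mem line var val _
      # Per-NUMA-node counters live in sysfs and prefix every line with "Node N ".
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"   # e.g. var=HugePages_Total val=1024
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done
      return 1
  }
  # Usage: get_meminfo_sketch HugePages_Surp 0   -> prints 0 for node0 in the run above.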
00:05:09.623 09:14:54 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp
00:05:09.623 09:14:54 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:05:09.881 09:14:54 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:05:09.881 09:14:54 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:09.881 09:14:54 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:09.881 09:14:54 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:09.881 09:14:54 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:09.881 09:14:54 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:05:09.881 09:14:54 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:09.881 09:14:54 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:09.881 09:14:54 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:09.881 09:14:54 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:09.881 09:14:54 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:05:09.881 09:14:54 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:05:09.881
00:05:09.881 real 0m11.270s
00:05:09.881 user 0m4.396s
00:05:09.881 sys 0m5.740s
00:05:09.881 09:14:54 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:09.881 09:14:54 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:09.881 ************************************
00:05:09.881 END TEST hugepages
00:05:09.881 ************************************
00:05:09.881 09:14:54 setup.sh -- common/autotest_common.sh@1142 -- # return 0
00:05:09.881 09:14:54 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:05:09.881 09:14:54 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:09.881 09:14:54 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:09.881 09:14:54 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:05:09.881 ************************************
00:05:09.881 START TEST driver
00:05:09.881 ************************************
00:05:09.881 09:14:54 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:05:09.881 * Looking for test storage...
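The guess_driver trace that follows settles on vfio-pci once it sees populated IOMMU groups (141 in this run) and modprobe --show-depends resolves vfio_pci to real .ko modules; the later comparison against 'No valid driver found' in the trace is the failure path. A minimal bash sketch of that decision, with an illustrative function name and nullglob added as a sketch assumption so an empty iommu_groups directory counts as zero:

  # Sketch only (not part of the log): the driver pick as traced from test/setup/driver.sh.
  shopt -s nullglob   # sketch assumption: let an unmatched glob expand to nothing
  pick_driver_sketch() {
      local iommu_groups=(/sys/kernel/iommu_groups/*)
      if (( ${#iommu_groups[@]} > 0 )) &&
         [[ $(modprobe --show-depends vfio_pci 2>/dev/null) == *.ko* ]]; then
          echo vfio-pci              # IOMMU groups present and vfio_pci resolvable
      else
          echo 'No valid driver found'
      fi
  }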
00:05:09.881 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:09.881 09:14:54 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:09.881 09:14:54 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:09.881 09:14:54 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:12.406 09:14:56 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:12.406 09:14:56 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:12.406 09:14:56 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.406 09:14:56 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:12.406 ************************************ 00:05:12.406 START TEST guess_driver 00:05:12.406 ************************************ 00:05:12.406 09:14:56 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:05:12.407 09:14:56 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:12.407 09:14:56 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:12.407 09:14:56 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:12.407 09:14:56 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:12.407 09:14:56 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:12.407 09:14:56 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:12.407 09:14:56 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:12.407 09:14:56 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:05:12.407 09:14:56 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:12.407 09:14:56 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:05:12.407 09:14:56 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:05:12.407 09:14:56 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:05:12.407 09:14:56 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:05:12.407 09:14:56 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:05:12.407 09:14:56 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:05:12.407 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:12.407 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:12.407 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:12.407 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:12.407 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:05:12.407 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:05:12.407 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:05:12.407 09:14:56 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:05:12.407 09:14:56 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:05:12.407 09:14:56 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:05:12.407 09:14:56 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:12.407 09:14:56 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:05:12.407 Looking for driver=vfio-pci 00:05:12.407 09:14:56 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:12.407 09:14:56 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:05:12.407 09:14:56 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:12.407 09:14:56 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:13.781 09:14:57 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:13.781 09:14:57 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:13.781 09:14:57 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:13.781 09:14:57 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:13.781 09:14:57 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:13.781 09:14:57 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:13.781 09:14:57 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:13.781 09:14:57 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:13.781 09:14:57 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:13.781 09:14:57 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:13.781 09:14:57 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:13.781 09:14:57 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:13.781 09:14:57 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:13.781 09:14:57 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:13.781 09:14:57 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:13.781 09:14:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:13.781 09:14:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:13.781 09:14:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:13.781 09:14:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:13.781 09:14:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:13.781 09:14:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:13.781 09:14:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:13.781 09:14:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:13.781 09:14:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:13.781 09:14:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:13.781 09:14:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:13.781 09:14:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:13.781 09:14:58 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:13.781 09:14:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:13.781 09:14:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:13.782 09:14:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:13.782 09:14:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:13.782 09:14:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:13.782 09:14:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:13.782 09:14:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:13.782 09:14:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:13.782 09:14:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:13.782 09:14:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:13.782 09:14:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:13.782 09:14:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:13.782 09:14:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:13.782 09:14:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:13.782 09:14:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:13.782 09:14:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:13.782 09:14:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:13.782 09:14:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:13.782 09:14:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:13.782 09:14:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:14.720 09:14:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:14.720 09:14:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:14.720 09:14:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:14.720 09:14:59 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:14.720 09:14:59 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:14.720 09:14:59 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:14.720 09:14:59 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:17.253 00:05:17.253 real 0m4.833s 00:05:17.253 user 0m1.092s 00:05:17.253 sys 0m1.847s 00:05:17.253 09:15:01 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.253 09:15:01 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:17.253 ************************************ 00:05:17.253 END TEST guess_driver 00:05:17.253 ************************************ 00:05:17.253 09:15:01 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:05:17.253 00:05:17.253 real 0m7.523s 00:05:17.253 user 0m1.679s 00:05:17.253 sys 0m2.953s 00:05:17.253 09:15:01 
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.253 09:15:01 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:17.253 ************************************ 00:05:17.253 END TEST driver 00:05:17.253 ************************************ 00:05:17.253 09:15:01 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:17.254 09:15:01 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:17.254 09:15:01 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:17.254 09:15:01 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.254 09:15:01 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:17.254 ************************************ 00:05:17.254 START TEST devices 00:05:17.254 ************************************ 00:05:17.254 09:15:01 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:17.512 * Looking for test storage... 00:05:17.512 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:17.512 09:15:01 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:17.512 09:15:01 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:17.512 09:15:01 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:17.512 09:15:01 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:18.888 09:15:03 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:18.888 09:15:03 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:18.888 09:15:03 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:18.888 09:15:03 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:18.888 09:15:03 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:18.888 09:15:03 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:18.888 09:15:03 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:18.888 09:15:03 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:18.888 09:15:03 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:18.888 09:15:03 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:18.888 09:15:03 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:18.888 09:15:03 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:18.888 09:15:03 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:18.888 09:15:03 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:18.888 09:15:03 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:18.888 09:15:03 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:18.888 09:15:03 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:18.888 09:15:03 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:05:18.888 09:15:03 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:05:18.888 09:15:03 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:18.888 09:15:03 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:18.888 
09:15:03 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:05:18.888 No valid GPT data, bailing 00:05:18.888 09:15:03 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:18.888 09:15:03 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:18.888 09:15:03 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:18.888 09:15:03 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:18.888 09:15:03 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:18.888 09:15:03 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:18.888 09:15:03 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:05:18.888 09:15:03 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:05:18.888 09:15:03 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:18.888 09:15:03 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:05:18.888 09:15:03 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:18.888 09:15:03 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:18.888 09:15:03 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:18.888 09:15:03 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:18.888 09:15:03 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.888 09:15:03 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:18.888 ************************************ 00:05:18.888 START TEST nvme_mount 00:05:18.888 ************************************ 00:05:18.888 09:15:03 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:05:18.888 09:15:03 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:18.888 09:15:03 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:18.888 09:15:03 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:18.888 09:15:03 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:18.888 09:15:03 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:18.888 09:15:03 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:18.888 09:15:03 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:18.888 09:15:03 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:18.888 09:15:03 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:18.888 09:15:03 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:18.888 09:15:03 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:18.888 09:15:03 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:18.888 09:15:03 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:18.888 09:15:03 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:18.888 09:15:03 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:18.888 09:15:03 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
# (( part <= part_no )) 00:05:18.888 09:15:03 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:18.888 09:15:03 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:18.888 09:15:03 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:19.824 Creating new GPT entries in memory. 00:05:19.824 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:19.824 other utilities. 00:05:19.824 09:15:04 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:19.824 09:15:04 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:19.824 09:15:04 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:19.824 09:15:04 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:19.824 09:15:04 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:21.201 Creating new GPT entries in memory. 00:05:21.201 The operation has completed successfully. 00:05:21.201 09:15:05 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:21.201 09:15:05 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:21.201 09:15:05 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 597018 00:05:21.201 09:15:05 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:21.201 09:15:05 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:21.201 09:15:05 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:21.201 09:15:05 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:21.201 09:15:05 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:21.201 09:15:05 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:21.201 09:15:05 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:21.201 09:15:05 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:21.201 09:15:05 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:21.201 09:15:05 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:21.201 09:15:05 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:21.201 09:15:05 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:21.201 09:15:05 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:21.201 09:15:05 
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:21.201 09:15:05 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:21.201 09:15:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.201 09:15:05 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:21.201 09:15:05 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:21.201 09:15:05 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:21.201 09:15:05 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:22.135 09:15:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.135 09:15:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:22.135 09:15:06 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:22.135 09:15:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.135 09:15:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.135 09:15:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.135 09:15:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.135 09:15:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.135 09:15:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.135 09:15:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.135 09:15:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.135 09:15:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.135 09:15:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.135 09:15:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.135 09:15:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.135 09:15:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.135 09:15:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.135 09:15:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.135 09:15:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.136 09:15:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.136 09:15:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.136 09:15:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.136 09:15:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.136 09:15:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.136 09:15:06 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.136 09:15:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.136 09:15:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.136 09:15:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.136 09:15:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.136 09:15:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.136 09:15:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.136 09:15:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.136 09:15:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.136 09:15:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.136 09:15:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.136 09:15:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.393 09:15:06 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:22.393 09:15:06 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:22.393 09:15:06 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:22.393 09:15:06 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:22.393 09:15:06 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:22.393 09:15:06 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:22.394 09:15:06 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:22.394 09:15:06 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:22.394 09:15:06 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:22.394 09:15:06 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:22.394 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:22.394 09:15:06 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:22.394 09:15:06 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:22.651 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:22.651 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:22.651 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:22.651 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:22.651 09:15:06 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:22.651 09:15:06 setup.sh.devices.nvme_mount -- 
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:22.651 09:15:06 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:22.651 09:15:06 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:22.651 09:15:06 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:22.651 09:15:06 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:22.651 09:15:06 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:22.651 09:15:06 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:22.651 09:15:06 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:22.651 09:15:06 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:22.651 09:15:06 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:22.651 09:15:06 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:22.651 09:15:06 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:22.651 09:15:06 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:22.651 09:15:06 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:22.651 09:15:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.651 09:15:06 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:22.651 09:15:06 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:22.652 09:15:06 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:22.652 09:15:06 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:24.037 09:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.037 09:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:24.037 09:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:24.037 09:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.037 09:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.037 09:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.037 09:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.037 09:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.037 09:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 
== \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.037 09:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.037 09:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.037 09:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.037 09:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.037 09:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.037 09:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.037 09:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.037 09:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.037 09:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.037 09:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.037 09:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.037 09:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.037 09:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.037 09:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.037 09:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.037 09:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.037 09:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.037 09:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.037 09:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.037 09:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.037 09:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.037 09:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.037 09:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.037 09:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.037 09:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.037 09:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.037 09:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.037 09:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:24.037 09:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:24.037 09:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:24.037 09:15:08 setup.sh.devices.nvme_mount -- 
setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:24.037 09:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:24.037 09:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:24.037 09:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:05:24.037 09:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:24.037 09:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:24.037 09:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:24.037 09:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:24.037 09:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:24.037 09:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:24.037 09:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:24.037 09:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.037 09:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:24.037 09:15:08 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:24.037 09:15:08 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:24.037 09:15:08 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:24.988 09:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.988 09:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:24.988 09:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:24.988 09:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.988 09:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.988 09:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.988 09:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.988 09:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.988 09:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.988 09:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.988 09:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.988 09:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.988 09:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.988 09:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.988 09:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 
00:05:24.988 09:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.988 09:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.988 09:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.988 09:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.988 09:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.988 09:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.988 09:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.988 09:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.988 09:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.988 09:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.988 09:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.988 09:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.988 09:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.988 09:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.988 09:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.988 09:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.988 09:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.988 09:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.988 09:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.988 09:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:24.988 09:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.247 09:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:25.247 09:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:25.247 09:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:25.247 09:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:25.247 09:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:25.247 09:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:25.247 09:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:25.247 09:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:25.247 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:25.247 00:05:25.247 real 0m6.316s 00:05:25.247 user 0m1.503s 00:05:25.247 sys 0m2.384s 00:05:25.248 09:15:09 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:25.248 09:15:09 setup.sh.devices.nvme_mount -- 
common/autotest_common.sh@10 -- # set +x 00:05:25.248 ************************************ 00:05:25.248 END TEST nvme_mount 00:05:25.248 ************************************ 00:05:25.248 09:15:09 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:25.248 09:15:09 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:25.248 09:15:09 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:25.248 09:15:09 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.248 09:15:09 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:25.248 ************************************ 00:05:25.248 START TEST dm_mount 00:05:25.248 ************************************ 00:05:25.248 09:15:09 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:05:25.248 09:15:09 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:25.248 09:15:09 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:25.248 09:15:09 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:25.248 09:15:09 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:25.248 09:15:09 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:25.248 09:15:09 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:25.248 09:15:09 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:25.248 09:15:09 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:25.248 09:15:09 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:25.248 09:15:09 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:25.248 09:15:09 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:25.248 09:15:09 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:25.248 09:15:09 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:25.248 09:15:09 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:25.248 09:15:09 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:25.248 09:15:09 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:25.248 09:15:09 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:25.248 09:15:09 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:25.248 09:15:09 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:25.248 09:15:09 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:25.248 09:15:09 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:26.183 Creating new GPT entries in memory. 00:05:26.183 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:26.183 other utilities. 00:05:26.183 09:15:10 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:26.183 09:15:10 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:26.183 09:15:10 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:26.183 09:15:10 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:26.183 09:15:10 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:27.563 Creating new GPT entries in memory. 00:05:27.563 The operation has completed successfully. 00:05:27.563 09:15:11 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:27.563 09:15:11 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:27.563 09:15:11 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:27.563 09:15:11 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:27.563 09:15:11 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:28.500 The operation has completed successfully. 00:05:28.500 09:15:12 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:28.500 09:15:12 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:28.500 09:15:12 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 599910 00:05:28.500 09:15:12 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:28.500 09:15:12 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:28.500 09:15:12 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:28.500 09:15:12 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:28.500 09:15:12 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:28.500 09:15:12 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:28.500 09:15:12 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:28.500 09:15:12 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:28.500 09:15:12 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:28.500 09:15:12 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:28.500 09:15:12 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:28.500 09:15:12 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:28.500 09:15:12 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:28.500 09:15:12 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:28.500 09:15:12 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:28.500 09:15:12 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:28.500 09:15:12 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:28.500 09:15:12 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:28.500 09:15:12 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:28.500 09:15:12 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:28.500 09:15:12 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:28.500 09:15:12 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:28.500 09:15:12 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:28.500 09:15:12 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:28.500 09:15:12 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:28.500 09:15:12 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:28.500 09:15:12 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:28.500 09:15:12 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:28.500 09:15:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.500 09:15:12 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:28.500 09:15:12 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:28.500 09:15:12 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:28.500 09:15:12 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:29.434 09:15:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:29.434 09:15:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:29.434 09:15:13 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:29.435 09:15:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.435 09:15:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:29.435 09:15:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.435 09:15:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:29.435 09:15:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.435 09:15:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:29.435 09:15:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.435 09:15:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:29.435 09:15:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.435 09:15:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:29.435 09:15:13 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.435 09:15:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:29.435 09:15:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.435 09:15:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:29.435 09:15:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.435 09:15:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:29.435 09:15:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.435 09:15:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:29.435 09:15:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.435 09:15:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:29.435 09:15:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.435 09:15:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:29.435 09:15:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.435 09:15:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:29.435 09:15:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.435 09:15:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:29.435 09:15:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.435 09:15:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:29.435 09:15:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.435 09:15:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:29.435 09:15:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.435 09:15:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:29.435 09:15:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.693 09:15:13 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:29.693 09:15:13 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:29.693 09:15:13 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:29.693 09:15:13 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:29.693 09:15:13 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:29.693 09:15:13 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:29.693 09:15:13 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:29.693 09:15:13 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:29.693 09:15:13 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:29.693 09:15:13 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:29.693 09:15:13 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:29.693 09:15:13 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:29.693 09:15:13 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:29.693 09:15:13 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:29.693 09:15:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.693 09:15:13 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:29.693 09:15:13 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:29.693 09:15:13 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:29.693 09:15:13 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:30.628 09:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:30.628 09:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:30.628 09:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:30.628 09:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.628 09:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:30.628 09:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.628 09:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:30.628 09:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.628 09:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:30.628 09:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.628 09:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:30.628 09:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.628 09:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:30.628 09:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.628 09:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:30.628 09:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.628 09:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:30.628 09:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.628 09:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:30.628 09:15:14 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.628 09:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:30.628 09:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.628 09:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:30.628 09:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.628 09:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:30.628 09:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.628 09:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:30.628 09:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.628 09:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:30.628 09:15:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.628 09:15:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:30.628 09:15:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.628 09:15:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:30.628 09:15:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.628 09:15:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:30.628 09:15:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.887 09:15:15 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:30.887 09:15:15 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:30.887 09:15:15 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:30.887 09:15:15 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:30.887 09:15:15 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:30.887 09:15:15 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:30.887 09:15:15 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:30.887 09:15:15 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:30.887 09:15:15 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:30.887 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:30.887 09:15:15 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:30.887 09:15:15 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:30.887 00:05:30.887 real 0m5.639s 00:05:30.887 user 0m0.939s 00:05:30.887 sys 0m1.567s 00:05:30.887 09:15:15 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.887 09:15:15 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:30.887 ************************************ 00:05:30.887 END TEST dm_mount 00:05:30.887 ************************************ 00:05:30.887 09:15:15 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 
0 00:05:30.887 09:15:15 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:30.887 09:15:15 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:30.887 09:15:15 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:30.887 09:15:15 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:30.887 09:15:15 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:30.887 09:15:15 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:30.887 09:15:15 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:31.146 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:31.146 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:31.146 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:31.146 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:31.146 09:15:15 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:31.146 09:15:15 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:31.146 09:15:15 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:31.146 09:15:15 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:31.146 09:15:15 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:31.146 09:15:15 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:31.146 09:15:15 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:31.146 00:05:31.146 real 0m13.858s 00:05:31.146 user 0m3.062s 00:05:31.146 sys 0m4.996s 00:05:31.146 09:15:15 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.146 09:15:15 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:31.146 ************************************ 00:05:31.146 END TEST devices 00:05:31.146 ************************************ 00:05:31.146 09:15:15 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:31.146 00:05:31.146 real 0m43.413s 00:05:31.146 user 0m12.475s 00:05:31.146 sys 0m19.116s 00:05:31.146 09:15:15 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.146 09:15:15 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:31.146 ************************************ 00:05:31.146 END TEST setup.sh 00:05:31.146 ************************************ 00:05:31.146 09:15:15 -- common/autotest_common.sh@1142 -- # return 0 00:05:31.146 09:15:15 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:32.520 Hugepages 00:05:32.520 node hugesize free / total 00:05:32.520 node0 1048576kB 0 / 0 00:05:32.520 node0 2048kB 2048 / 2048 00:05:32.520 node1 1048576kB 0 / 0 00:05:32.520 node1 2048kB 0 / 0 00:05:32.520 00:05:32.520 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:32.520 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:05:32.520 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:05:32.520 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:05:32.520 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:05:32.520 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:05:32.520 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:05:32.520 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:05:32.520 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:05:32.520 I/OAT 
0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:32.520 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:32.520 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:05:32.520 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:32.520 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:32.520 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:32.520 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:05:32.520 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:32.520 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:32.520 09:15:16 -- spdk/autotest.sh@130 -- # uname -s 00:05:32.520 09:15:16 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:32.520 09:15:16 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:32.520 09:15:16 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:33.897 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:33.897 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:33.897 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:33.897 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:33.897 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:33.897 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:33.897 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:33.897 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:33.897 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:33.897 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:33.897 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:33.897 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:33.897 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:33.897 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:33.897 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:33.897 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:34.836 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:34.836 09:15:19 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:35.775 09:15:20 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:35.775 09:15:20 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:35.775 09:15:20 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:35.775 09:15:20 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:35.775 09:15:20 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:35.775 09:15:20 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:35.775 09:15:20 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:35.775 09:15:20 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:35.775 09:15:20 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:35.775 09:15:20 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:35.775 09:15:20 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:05:35.775 09:15:20 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:37.152 Waiting for block devices as requested 00:05:37.152 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:05:37.152 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:37.411 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:37.411 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:37.411 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:37.411 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:37.669 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:37.669 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:37.669 0000:00:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:05:37.669 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:37.927 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:37.927 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:37.927 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:37.927 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:38.187 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:38.187 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:38.187 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:38.445 09:15:22 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:38.445 09:15:22 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:05:38.445 09:15:22 -- common/autotest_common.sh@1502 -- # grep 0000:88:00.0/nvme/nvme 00:05:38.445 09:15:22 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:38.445 09:15:22 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:38.445 09:15:22 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:05:38.445 09:15:22 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:38.445 09:15:22 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:38.445 09:15:22 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:38.445 09:15:22 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:38.445 09:15:22 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:38.445 09:15:22 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:38.445 09:15:22 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:38.445 09:15:22 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:05:38.445 09:15:22 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:38.445 09:15:22 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:38.445 09:15:22 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:38.445 09:15:22 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:38.445 09:15:22 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:38.445 09:15:22 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:38.445 09:15:22 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:38.445 09:15:22 -- common/autotest_common.sh@1557 -- # continue 00:05:38.445 09:15:22 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:38.445 09:15:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:38.445 09:15:22 -- common/autotest_common.sh@10 -- # set +x 00:05:38.445 09:15:22 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:38.445 09:15:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:38.445 09:15:22 -- common/autotest_common.sh@10 -- # set +x 00:05:38.445 09:15:22 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:39.386 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:39.386 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:39.696 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:39.696 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:39.696 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:39.696 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:39.696 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:39.696 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:39.696 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:39.696 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 
00:05:39.696 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:39.696 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:39.696 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:39.696 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:39.696 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:39.696 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:40.634 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:40.634 09:15:25 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:40.634 09:15:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:40.634 09:15:25 -- common/autotest_common.sh@10 -- # set +x 00:05:40.634 09:15:25 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:40.634 09:15:25 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:40.634 09:15:25 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:40.634 09:15:25 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:40.634 09:15:25 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:40.634 09:15:25 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:40.634 09:15:25 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:40.634 09:15:25 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:40.634 09:15:25 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:40.634 09:15:25 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:40.634 09:15:25 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:40.893 09:15:25 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:40.893 09:15:25 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:05:40.893 09:15:25 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:40.893 09:15:25 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:05:40.893 09:15:25 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:05:40.893 09:15:25 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:40.893 09:15:25 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:05:40.893 09:15:25 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:88:00.0 00:05:40.893 09:15:25 -- common/autotest_common.sh@1592 -- # [[ -z 0000:88:00.0 ]] 00:05:40.893 09:15:25 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=605086 00:05:40.893 09:15:25 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:40.893 09:15:25 -- common/autotest_common.sh@1598 -- # waitforlisten 605086 00:05:40.893 09:15:25 -- common/autotest_common.sh@829 -- # '[' -z 605086 ']' 00:05:40.893 09:15:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.893 09:15:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:40.893 09:15:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.893 09:15:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:40.893 09:15:25 -- common/autotest_common.sh@10 -- # set +x 00:05:40.893 [2024-07-14 09:15:25.190506] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:05:40.893 [2024-07-14 09:15:25.190609] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid605086 ] 00:05:40.893 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.893 [2024-07-14 09:15:25.248658] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.893 [2024-07-14 09:15:25.336796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.152 09:15:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:41.152 09:15:25 -- common/autotest_common.sh@862 -- # return 0 00:05:41.152 09:15:25 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:05:41.152 09:15:25 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:05:41.152 09:15:25 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:05:44.438 nvme0n1 00:05:44.438 09:15:28 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:44.438 [2024-07-14 09:15:28.890789] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:44.438 [2024-07-14 09:15:28.890834] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:44.697 request: 00:05:44.697 { 00:05:44.697 "nvme_ctrlr_name": "nvme0", 00:05:44.697 "password": "test", 00:05:44.697 "method": "bdev_nvme_opal_revert", 00:05:44.697 "req_id": 1 00:05:44.697 } 00:05:44.697 Got JSON-RPC error response 00:05:44.697 response: 00:05:44.697 { 00:05:44.697 "code": -32603, 00:05:44.697 "message": "Internal error" 00:05:44.697 } 00:05:44.697 09:15:28 -- common/autotest_common.sh@1604 -- # true 00:05:44.697 09:15:28 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:05:44.697 09:15:28 -- common/autotest_common.sh@1608 -- # killprocess 605086 00:05:44.697 09:15:28 -- common/autotest_common.sh@948 -- # '[' -z 605086 ']' 00:05:44.697 09:15:28 -- common/autotest_common.sh@952 -- # kill -0 605086 00:05:44.697 09:15:28 -- common/autotest_common.sh@953 -- # uname 00:05:44.697 09:15:28 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:44.697 09:15:28 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 605086 00:05:44.697 09:15:28 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:44.697 09:15:28 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:44.697 09:15:28 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 605086' 00:05:44.697 killing process with pid 605086 00:05:44.697 09:15:28 -- common/autotest_common.sh@967 -- # kill 605086 00:05:44.697 09:15:28 -- common/autotest_common.sh@972 -- # wait 605086 00:05:44.697 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:44.697 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:44.697 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:44.697 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:44.697 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:44.697 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:44.697 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:44.697 EAL: Unexpected size 0 of DMA 
remapping cleared instead of 2097152 00:05:46.598 09:15:30 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:46.598 09:15:30 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:46.598 09:15:30 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:46.598 09:15:30 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:46.598 09:15:30 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:46.598 09:15:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:46.599 09:15:30 -- common/autotest_common.sh@10 -- # set +x 00:05:46.599 09:15:30 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:46.599 09:15:30 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:46.599 09:15:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:46.599 09:15:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.599 09:15:30 -- common/autotest_common.sh@10 -- # set +x 00:05:46.599 ************************************ 00:05:46.599 START TEST env 00:05:46.599 ************************************ 00:05:46.599 09:15:30 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:46.599 * Looking for test storage... 
00:05:46.599 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:46.599 09:15:30 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:46.599 09:15:30 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:46.599 09:15:30 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.599 09:15:30 env -- common/autotest_common.sh@10 -- # set +x 00:05:46.599 ************************************ 00:05:46.599 START TEST env_memory 00:05:46.599 ************************************ 00:05:46.599 09:15:30 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:46.599 00:05:46.599 00:05:46.599 CUnit - A unit testing framework for C - Version 2.1-3 00:05:46.599 http://cunit.sourceforge.net/ 00:05:46.599 00:05:46.599 00:05:46.599 Suite: memory 00:05:46.599 Test: alloc and free memory map ...[2024-07-14 09:15:30.849168] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:46.599 passed 00:05:46.599 Test: mem map translation ...[2024-07-14 09:15:30.869066] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:46.599 [2024-07-14 09:15:30.869089] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:46.599 [2024-07-14 09:15:30.869139] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:46.599 [2024-07-14 09:15:30.869151] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:46.599 passed 00:05:46.599 Test: mem map registration ...[2024-07-14 09:15:30.909433] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:46.599 [2024-07-14 09:15:30.909453] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:46.599 passed 00:05:46.599 Test: mem map adjacent registrations ...passed 00:05:46.599 00:05:46.599 Run Summary: Type Total Ran Passed Failed Inactive 00:05:46.599 suites 1 1 n/a 0 0 00:05:46.599 tests 4 4 4 0 0 00:05:46.599 asserts 152 152 152 0 n/a 00:05:46.599 00:05:46.599 Elapsed time = 0.140 seconds 00:05:46.599 00:05:46.599 real 0m0.148s 00:05:46.599 user 0m0.140s 00:05:46.599 sys 0m0.008s 00:05:46.599 09:15:30 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.599 09:15:30 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:46.599 ************************************ 00:05:46.599 END TEST env_memory 00:05:46.599 ************************************ 00:05:46.599 09:15:30 env -- common/autotest_common.sh@1142 -- # return 0 00:05:46.599 09:15:30 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:46.599 09:15:30 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
00:05:46.599 09:15:30 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.599 09:15:30 env -- common/autotest_common.sh@10 -- # set +x 00:05:46.599 ************************************ 00:05:46.599 START TEST env_vtophys 00:05:46.599 ************************************ 00:05:46.599 09:15:31 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:46.599 EAL: lib.eal log level changed from notice to debug 00:05:46.599 EAL: Detected lcore 0 as core 0 on socket 0 00:05:46.599 EAL: Detected lcore 1 as core 1 on socket 0 00:05:46.599 EAL: Detected lcore 2 as core 2 on socket 0 00:05:46.599 EAL: Detected lcore 3 as core 3 on socket 0 00:05:46.599 EAL: Detected lcore 4 as core 4 on socket 0 00:05:46.599 EAL: Detected lcore 5 as core 5 on socket 0 00:05:46.599 EAL: Detected lcore 6 as core 8 on socket 0 00:05:46.599 EAL: Detected lcore 7 as core 9 on socket 0 00:05:46.599 EAL: Detected lcore 8 as core 10 on socket 0 00:05:46.599 EAL: Detected lcore 9 as core 11 on socket 0 00:05:46.599 EAL: Detected lcore 10 as core 12 on socket 0 00:05:46.599 EAL: Detected lcore 11 as core 13 on socket 0 00:05:46.599 EAL: Detected lcore 12 as core 0 on socket 1 00:05:46.599 EAL: Detected lcore 13 as core 1 on socket 1 00:05:46.599 EAL: Detected lcore 14 as core 2 on socket 1 00:05:46.599 EAL: Detected lcore 15 as core 3 on socket 1 00:05:46.599 EAL: Detected lcore 16 as core 4 on socket 1 00:05:46.599 EAL: Detected lcore 17 as core 5 on socket 1 00:05:46.599 EAL: Detected lcore 18 as core 8 on socket 1 00:05:46.599 EAL: Detected lcore 19 as core 9 on socket 1 00:05:46.599 EAL: Detected lcore 20 as core 10 on socket 1 00:05:46.599 EAL: Detected lcore 21 as core 11 on socket 1 00:05:46.599 EAL: Detected lcore 22 as core 12 on socket 1 00:05:46.599 EAL: Detected lcore 23 as core 13 on socket 1 00:05:46.599 EAL: Detected lcore 24 as core 0 on socket 0 00:05:46.599 EAL: Detected lcore 25 as core 1 on socket 0 00:05:46.599 EAL: Detected lcore 26 as core 2 on socket 0 00:05:46.599 EAL: Detected lcore 27 as core 3 on socket 0 00:05:46.599 EAL: Detected lcore 28 as core 4 on socket 0 00:05:46.599 EAL: Detected lcore 29 as core 5 on socket 0 00:05:46.599 EAL: Detected lcore 30 as core 8 on socket 0 00:05:46.599 EAL: Detected lcore 31 as core 9 on socket 0 00:05:46.599 EAL: Detected lcore 32 as core 10 on socket 0 00:05:46.599 EAL: Detected lcore 33 as core 11 on socket 0 00:05:46.599 EAL: Detected lcore 34 as core 12 on socket 0 00:05:46.599 EAL: Detected lcore 35 as core 13 on socket 0 00:05:46.599 EAL: Detected lcore 36 as core 0 on socket 1 00:05:46.599 EAL: Detected lcore 37 as core 1 on socket 1 00:05:46.599 EAL: Detected lcore 38 as core 2 on socket 1 00:05:46.599 EAL: Detected lcore 39 as core 3 on socket 1 00:05:46.599 EAL: Detected lcore 40 as core 4 on socket 1 00:05:46.599 EAL: Detected lcore 41 as core 5 on socket 1 00:05:46.599 EAL: Detected lcore 42 as core 8 on socket 1 00:05:46.599 EAL: Detected lcore 43 as core 9 on socket 1 00:05:46.599 EAL: Detected lcore 44 as core 10 on socket 1 00:05:46.599 EAL: Detected lcore 45 as core 11 on socket 1 00:05:46.599 EAL: Detected lcore 46 as core 12 on socket 1 00:05:46.599 EAL: Detected lcore 47 as core 13 on socket 1 00:05:46.599 EAL: Maximum logical cores by configuration: 128 00:05:46.599 EAL: Detected CPU lcores: 48 00:05:46.599 EAL: Detected NUMA nodes: 2 00:05:46.599 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:46.599 EAL: Detected shared linkage of DPDK 
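The "Detected lcore X as core Y on socket Z" lines above come from the host's CPU topology (48 lcores across 2 NUMA nodes on this runner). A rough way to reproduce that view outside of DPDK is to read sysfs directly; the Python sketch below is illustrative only, is not part of the test suite, and assumes a standard Linux sysfs layout.

    #!/usr/bin/env python3
    # Illustrative only: rebuild the lcore -> core -> socket mapping that
    # DPDK's EAL prints above, straight from sysfs.
    import glob
    import os
    import re

    cpu_dirs = sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]*"),
                      key=lambda p: int(re.search(r"\d+$", p).group()))
    for cpu_dir in cpu_dirs:
        lcore = int(re.search(r"\d+$", cpu_dir).group())
        topo = os.path.join(cpu_dir, "topology")
        with open(os.path.join(topo, "core_id")) as f:
            core = f.read().strip()
        with open(os.path.join(topo, "physical_package_id")) as f:
            socket_id = f.read().strip()
        print(f"lcore {lcore} -> core {core} on socket {socket_id}")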
00:05:46.599 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:46.599 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:46.599 EAL: Registered [vdev] bus. 00:05:46.599 EAL: bus.vdev log level changed from disabled to notice 00:05:46.599 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:46.599 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:46.599 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:46.599 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:46.599 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:46.599 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:46.599 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:46.599 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:46.599 EAL: No shared files mode enabled, IPC will be disabled 00:05:46.857 EAL: No shared files mode enabled, IPC is disabled 00:05:46.857 EAL: Bus pci wants IOVA as 'DC' 00:05:46.857 EAL: Bus vdev wants IOVA as 'DC' 00:05:46.857 EAL: Buses did not request a specific IOVA mode. 00:05:46.857 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:46.857 EAL: Selected IOVA mode 'VA' 00:05:46.857 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.857 EAL: Probing VFIO support... 00:05:46.857 EAL: IOMMU type 1 (Type 1) is supported 00:05:46.857 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:46.857 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:46.857 EAL: VFIO support initialized 00:05:46.857 EAL: Ask a virtual area of 0x2e000 bytes 00:05:46.857 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:46.857 EAL: Setting up physically contiguous memory... 
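The IOVA/VFIO probing above (and the recurring "No free 2048 kB hugepages reported on node 1" notice) depends on host state that can be inspected without running DPDK at all. The sketch below is a hedged, illustrative host check, not part of the test run: it looks for the VFIO container device, visible IOMMU groups, and per-node 2 MB hugepage counts.

    #!/usr/bin/env python3
    # Illustrative host check: approximate the conditions behind
    # "IOMMU is available, selecting IOVA as VA mode" and
    # "VFIO support initialized" in the EAL output above.
    import glob
    import os

    vfio_dev = os.path.exists("/dev/vfio/vfio")          # VFIO container device
    iommu_groups = glob.glob("/sys/kernel/iommu_groups/*")
    hugepage_files = glob.glob(
        "/sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages")

    print(f"/dev/vfio/vfio present: {vfio_dev}")
    print(f"IOMMU groups visible:   {len(iommu_groups)}")
    for path in sorted(hugepage_files):
        node = path.split("/")[5]
        with open(path) as f:
            print(f"{node} 2048kB hugepages: {f.read().strip()}")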
00:05:46.857 EAL: Setting maximum number of open files to 524288 00:05:46.857 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:46.857 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:46.857 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:46.857 EAL: Ask a virtual area of 0x61000 bytes 00:05:46.857 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:46.857 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:46.857 EAL: Ask a virtual area of 0x400000000 bytes 00:05:46.857 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:46.857 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:46.857 EAL: Ask a virtual area of 0x61000 bytes 00:05:46.857 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:46.857 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:46.857 EAL: Ask a virtual area of 0x400000000 bytes 00:05:46.857 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:46.857 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:46.857 EAL: Ask a virtual area of 0x61000 bytes 00:05:46.857 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:46.857 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:46.857 EAL: Ask a virtual area of 0x400000000 bytes 00:05:46.857 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:46.857 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:46.857 EAL: Ask a virtual area of 0x61000 bytes 00:05:46.857 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:46.857 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:46.857 EAL: Ask a virtual area of 0x400000000 bytes 00:05:46.857 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:46.857 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:46.857 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:46.857 EAL: Ask a virtual area of 0x61000 bytes 00:05:46.857 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:46.857 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:46.857 EAL: Ask a virtual area of 0x400000000 bytes 00:05:46.857 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:46.857 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:46.857 EAL: Ask a virtual area of 0x61000 bytes 00:05:46.857 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:46.857 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:46.857 EAL: Ask a virtual area of 0x400000000 bytes 00:05:46.857 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:46.857 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:46.857 EAL: Ask a virtual area of 0x61000 bytes 00:05:46.857 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:46.857 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:46.857 EAL: Ask a virtual area of 0x400000000 bytes 00:05:46.857 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:46.857 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:46.857 EAL: Ask a virtual area of 0x61000 bytes 00:05:46.857 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:46.857 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:46.857 EAL: Ask a virtual area of 0x400000000 bytes 00:05:46.857 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:46.857 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:46.857 EAL: Hugepages will be freed exactly as allocated. 00:05:46.857 EAL: No shared files mode enabled, IPC is disabled 00:05:46.857 EAL: No shared files mode enabled, IPC is disabled 00:05:46.857 EAL: TSC frequency is ~2700000 KHz 00:05:46.857 EAL: Main lcore 0 is ready (tid=7f6c98781a00;cpuset=[0]) 00:05:46.857 EAL: Trying to obtain current memory policy. 00:05:46.857 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:46.857 EAL: Restoring previous memory policy: 0 00:05:46.857 EAL: request: mp_malloc_sync 00:05:46.857 EAL: No shared files mode enabled, IPC is disabled 00:05:46.857 EAL: Heap on socket 0 was expanded by 2MB 00:05:46.857 EAL: No shared files mode enabled, IPC is disabled 00:05:46.857 EAL: No shared files mode enabled, IPC is disabled 00:05:46.857 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:46.857 EAL: Mem event callback 'spdk:(nil)' registered 00:05:46.857 00:05:46.857 00:05:46.857 CUnit - A unit testing framework for C - Version 2.1-3 00:05:46.857 http://cunit.sourceforge.net/ 00:05:46.857 00:05:46.857 00:05:46.857 Suite: components_suite 00:05:46.857 Test: vtophys_malloc_test ...passed 00:05:46.857 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:46.857 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:46.857 EAL: Restoring previous memory policy: 4 00:05:46.857 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.857 EAL: request: mp_malloc_sync 00:05:46.857 EAL: No shared files mode enabled, IPC is disabled 00:05:46.858 EAL: Heap on socket 0 was expanded by 4MB 00:05:46.858 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.858 EAL: request: mp_malloc_sync 00:05:46.858 EAL: No shared files mode enabled, IPC is disabled 00:05:46.858 EAL: Heap on socket 0 was shrunk by 4MB 00:05:46.858 EAL: Trying to obtain current memory policy. 00:05:46.858 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:46.858 EAL: Restoring previous memory policy: 4 00:05:46.858 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.858 EAL: request: mp_malloc_sync 00:05:46.858 EAL: No shared files mode enabled, IPC is disabled 00:05:46.858 EAL: Heap on socket 0 was expanded by 6MB 00:05:46.858 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.858 EAL: request: mp_malloc_sync 00:05:46.858 EAL: No shared files mode enabled, IPC is disabled 00:05:46.858 EAL: Heap on socket 0 was shrunk by 6MB 00:05:46.858 EAL: Trying to obtain current memory policy. 00:05:46.858 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:46.858 EAL: Restoring previous memory policy: 4 00:05:46.858 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.858 EAL: request: mp_malloc_sync 00:05:46.858 EAL: No shared files mode enabled, IPC is disabled 00:05:46.858 EAL: Heap on socket 0 was expanded by 10MB 00:05:46.858 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.858 EAL: request: mp_malloc_sync 00:05:46.858 EAL: No shared files mode enabled, IPC is disabled 00:05:46.858 EAL: Heap on socket 0 was shrunk by 10MB 00:05:46.858 EAL: Trying to obtain current memory policy. 
00:05:46.858 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:46.858 EAL: Restoring previous memory policy: 4 00:05:46.858 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.858 EAL: request: mp_malloc_sync 00:05:46.858 EAL: No shared files mode enabled, IPC is disabled 00:05:46.858 EAL: Heap on socket 0 was expanded by 18MB 00:05:46.858 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.858 EAL: request: mp_malloc_sync 00:05:46.858 EAL: No shared files mode enabled, IPC is disabled 00:05:46.858 EAL: Heap on socket 0 was shrunk by 18MB 00:05:46.858 EAL: Trying to obtain current memory policy. 00:05:46.858 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:46.858 EAL: Restoring previous memory policy: 4 00:05:46.858 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.858 EAL: request: mp_malloc_sync 00:05:46.858 EAL: No shared files mode enabled, IPC is disabled 00:05:46.858 EAL: Heap on socket 0 was expanded by 34MB 00:05:46.858 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.858 EAL: request: mp_malloc_sync 00:05:46.858 EAL: No shared files mode enabled, IPC is disabled 00:05:46.858 EAL: Heap on socket 0 was shrunk by 34MB 00:05:46.858 EAL: Trying to obtain current memory policy. 00:05:46.858 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:46.858 EAL: Restoring previous memory policy: 4 00:05:46.858 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.858 EAL: request: mp_malloc_sync 00:05:46.858 EAL: No shared files mode enabled, IPC is disabled 00:05:46.858 EAL: Heap on socket 0 was expanded by 66MB 00:05:46.858 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.858 EAL: request: mp_malloc_sync 00:05:46.858 EAL: No shared files mode enabled, IPC is disabled 00:05:46.858 EAL: Heap on socket 0 was shrunk by 66MB 00:05:46.858 EAL: Trying to obtain current memory policy. 00:05:46.858 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:46.858 EAL: Restoring previous memory policy: 4 00:05:46.858 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.858 EAL: request: mp_malloc_sync 00:05:46.858 EAL: No shared files mode enabled, IPC is disabled 00:05:46.858 EAL: Heap on socket 0 was expanded by 130MB 00:05:46.858 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.858 EAL: request: mp_malloc_sync 00:05:46.858 EAL: No shared files mode enabled, IPC is disabled 00:05:46.858 EAL: Heap on socket 0 was shrunk by 130MB 00:05:46.858 EAL: Trying to obtain current memory policy. 00:05:46.858 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:46.858 EAL: Restoring previous memory policy: 4 00:05:46.858 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.858 EAL: request: mp_malloc_sync 00:05:46.858 EAL: No shared files mode enabled, IPC is disabled 00:05:46.858 EAL: Heap on socket 0 was expanded by 258MB 00:05:47.115 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.115 EAL: request: mp_malloc_sync 00:05:47.115 EAL: No shared files mode enabled, IPC is disabled 00:05:47.115 EAL: Heap on socket 0 was shrunk by 258MB 00:05:47.115 EAL: Trying to obtain current memory policy. 
00:05:47.115 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.115 EAL: Restoring previous memory policy: 4 00:05:47.115 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.115 EAL: request: mp_malloc_sync 00:05:47.115 EAL: No shared files mode enabled, IPC is disabled 00:05:47.115 EAL: Heap on socket 0 was expanded by 514MB 00:05:47.372 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.372 EAL: request: mp_malloc_sync 00:05:47.372 EAL: No shared files mode enabled, IPC is disabled 00:05:47.372 EAL: Heap on socket 0 was shrunk by 514MB 00:05:47.372 EAL: Trying to obtain current memory policy. 00:05:47.372 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.629 EAL: Restoring previous memory policy: 4 00:05:47.629 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.629 EAL: request: mp_malloc_sync 00:05:47.629 EAL: No shared files mode enabled, IPC is disabled 00:05:47.629 EAL: Heap on socket 0 was expanded by 1026MB 00:05:47.886 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.145 EAL: request: mp_malloc_sync 00:05:48.145 EAL: No shared files mode enabled, IPC is disabled 00:05:48.145 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:48.145 passed 00:05:48.145 00:05:48.145 Run Summary: Type Total Ran Passed Failed Inactive 00:05:48.145 suites 1 1 n/a 0 0 00:05:48.145 tests 2 2 2 0 0 00:05:48.145 asserts 497 497 497 0 n/a 00:05:48.145 00:05:48.145 Elapsed time = 1.367 seconds 00:05:48.145 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.145 EAL: request: mp_malloc_sync 00:05:48.145 EAL: No shared files mode enabled, IPC is disabled 00:05:48.145 EAL: Heap on socket 0 was shrunk by 2MB 00:05:48.145 EAL: No shared files mode enabled, IPC is disabled 00:05:48.145 EAL: No shared files mode enabled, IPC is disabled 00:05:48.145 EAL: No shared files mode enabled, IPC is disabled 00:05:48.145 00:05:48.145 real 0m1.479s 00:05:48.145 user 0m0.843s 00:05:48.145 sys 0m0.608s 00:05:48.145 09:15:32 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.145 09:15:32 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:48.145 ************************************ 00:05:48.145 END TEST env_vtophys 00:05:48.145 ************************************ 00:05:48.145 09:15:32 env -- common/autotest_common.sh@1142 -- # return 0 00:05:48.145 09:15:32 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:48.145 09:15:32 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:48.145 09:15:32 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.145 09:15:32 env -- common/autotest_common.sh@10 -- # set +x 00:05:48.145 ************************************ 00:05:48.145 START TEST env_pci 00:05:48.145 ************************************ 00:05:48.146 09:15:32 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:48.146 00:05:48.146 00:05:48.146 CUnit - A unit testing framework for C - Version 2.1-3 00:05:48.146 http://cunit.sourceforge.net/ 00:05:48.146 00:05:48.146 00:05:48.146 Suite: pci 00:05:48.146 Test: pci_hook ...[2024-07-14 09:15:32.548283] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 605979 has claimed it 00:05:48.146 EAL: Cannot find device (10000:00:01.0) 00:05:48.146 EAL: Failed to attach device on primary process 00:05:48.146 passed 00:05:48.146 
00:05:48.146 Run Summary: Type Total Ran Passed Failed Inactive 00:05:48.146 suites 1 1 n/a 0 0 00:05:48.146 tests 1 1 1 0 0 00:05:48.146 asserts 25 25 25 0 n/a 00:05:48.146 00:05:48.146 Elapsed time = 0.023 seconds 00:05:48.146 00:05:48.146 real 0m0.035s 00:05:48.146 user 0m0.008s 00:05:48.146 sys 0m0.027s 00:05:48.146 09:15:32 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.146 09:15:32 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:48.146 ************************************ 00:05:48.146 END TEST env_pci 00:05:48.146 ************************************ 00:05:48.146 09:15:32 env -- common/autotest_common.sh@1142 -- # return 0 00:05:48.146 09:15:32 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:48.146 09:15:32 env -- env/env.sh@15 -- # uname 00:05:48.146 09:15:32 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:48.146 09:15:32 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:48.146 09:15:32 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:48.146 09:15:32 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:48.146 09:15:32 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.403 09:15:32 env -- common/autotest_common.sh@10 -- # set +x 00:05:48.403 ************************************ 00:05:48.403 START TEST env_dpdk_post_init 00:05:48.403 ************************************ 00:05:48.403 09:15:32 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:48.403 EAL: Detected CPU lcores: 48 00:05:48.403 EAL: Detected NUMA nodes: 2 00:05:48.403 EAL: Detected shared linkage of DPDK 00:05:48.403 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:48.403 EAL: Selected IOVA mode 'VA' 00:05:48.403 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.403 EAL: VFIO support initialized 00:05:48.403 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:48.403 EAL: Using IOMMU type 1 (Type 1) 00:05:48.403 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:05:48.403 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:05:48.403 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:05:48.403 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:05:48.403 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:05:48.403 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:05:48.403 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:05:48.403 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:05:48.403 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:05:48.403 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:05:48.403 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:05:48.661 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:05:48.661 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:05:48.661 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:05:48.661 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 
0000:80:04.6 (socket 1) 00:05:48.661 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:05:49.225 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:05:52.502 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:05:52.502 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:05:52.761 Starting DPDK initialization... 00:05:52.761 Starting SPDK post initialization... 00:05:52.761 SPDK NVMe probe 00:05:52.761 Attaching to 0000:88:00.0 00:05:52.761 Attached to 0000:88:00.0 00:05:52.761 Cleaning up... 00:05:52.761 00:05:52.761 real 0m4.386s 00:05:52.761 user 0m3.261s 00:05:52.761 sys 0m0.188s 00:05:52.761 09:15:37 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:52.761 09:15:37 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:52.761 ************************************ 00:05:52.761 END TEST env_dpdk_post_init 00:05:52.761 ************************************ 00:05:52.761 09:15:37 env -- common/autotest_common.sh@1142 -- # return 0 00:05:52.761 09:15:37 env -- env/env.sh@26 -- # uname 00:05:52.761 09:15:37 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:52.761 09:15:37 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:52.761 09:15:37 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:52.761 09:15:37 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.761 09:15:37 env -- common/autotest_common.sh@10 -- # set +x 00:05:52.761 ************************************ 00:05:52.761 START TEST env_mem_callbacks 00:05:52.761 ************************************ 00:05:52.761 09:15:37 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:52.761 EAL: Detected CPU lcores: 48 00:05:52.761 EAL: Detected NUMA nodes: 2 00:05:52.761 EAL: Detected shared linkage of DPDK 00:05:52.761 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:52.761 EAL: Selected IOVA mode 'VA' 00:05:52.761 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.761 EAL: VFIO support initialized 00:05:52.761 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:52.761 00:05:52.761 00:05:52.761 CUnit - A unit testing framework for C - Version 2.1-3 00:05:52.761 http://cunit.sourceforge.net/ 00:05:52.761 00:05:52.761 00:05:52.761 Suite: memory 00:05:52.761 Test: test ... 
00:05:52.761 register 0x200000200000 2097152 00:05:52.761 malloc 3145728 00:05:52.761 register 0x200000400000 4194304 00:05:52.761 buf 0x200000500000 len 3145728 PASSED 00:05:52.761 malloc 64 00:05:52.761 buf 0x2000004fff40 len 64 PASSED 00:05:52.761 malloc 4194304 00:05:52.761 register 0x200000800000 6291456 00:05:52.761 buf 0x200000a00000 len 4194304 PASSED 00:05:52.761 free 0x200000500000 3145728 00:05:52.761 free 0x2000004fff40 64 00:05:52.761 unregister 0x200000400000 4194304 PASSED 00:05:52.761 free 0x200000a00000 4194304 00:05:52.761 unregister 0x200000800000 6291456 PASSED 00:05:52.762 malloc 8388608 00:05:52.762 register 0x200000400000 10485760 00:05:52.762 buf 0x200000600000 len 8388608 PASSED 00:05:52.762 free 0x200000600000 8388608 00:05:52.762 unregister 0x200000400000 10485760 PASSED 00:05:52.762 passed 00:05:52.762 00:05:52.762 Run Summary: Type Total Ran Passed Failed Inactive 00:05:52.762 suites 1 1 n/a 0 0 00:05:52.762 tests 1 1 1 0 0 00:05:52.762 asserts 15 15 15 0 n/a 00:05:52.762 00:05:52.762 Elapsed time = 0.005 seconds 00:05:52.762 00:05:52.762 real 0m0.048s 00:05:52.762 user 0m0.014s 00:05:52.762 sys 0m0.034s 00:05:52.762 09:15:37 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:52.762 09:15:37 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:52.762 ************************************ 00:05:52.762 END TEST env_mem_callbacks 00:05:52.762 ************************************ 00:05:52.762 09:15:37 env -- common/autotest_common.sh@1142 -- # return 0 00:05:52.762 00:05:52.762 real 0m6.390s 00:05:52.762 user 0m4.381s 00:05:52.762 sys 0m1.062s 00:05:52.762 09:15:37 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:52.762 09:15:37 env -- common/autotest_common.sh@10 -- # set +x 00:05:52.762 ************************************ 00:05:52.762 END TEST env 00:05:52.762 ************************************ 00:05:52.762 09:15:37 -- common/autotest_common.sh@1142 -- # return 0 00:05:52.762 09:15:37 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:52.762 09:15:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:52.762 09:15:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.762 09:15:37 -- common/autotest_common.sh@10 -- # set +x 00:05:52.762 ************************************ 00:05:52.762 START TEST rpc 00:05:52.762 ************************************ 00:05:52.762 09:15:37 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:53.021 * Looking for test storage... 00:05:53.021 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:53.021 09:15:37 rpc -- rpc/rpc.sh@65 -- # spdk_pid=606630 00:05:53.021 09:15:37 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:53.021 09:15:37 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:53.021 09:15:37 rpc -- rpc/rpc.sh@67 -- # waitforlisten 606630 00:05:53.021 09:15:37 rpc -- common/autotest_common.sh@829 -- # '[' -z 606630 ']' 00:05:53.021 09:15:37 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.021 09:15:37 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.021 09:15:37 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:53.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.021 09:15:37 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.021 09:15:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.021 [2024-07-14 09:15:37.281454] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:05:53.021 [2024-07-14 09:15:37.281559] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid606630 ] 00:05:53.021 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.021 [2024-07-14 09:15:37.345974] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.021 [2024-07-14 09:15:37.440436] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:53.021 [2024-07-14 09:15:37.440510] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 606630' to capture a snapshot of events at runtime. 00:05:53.021 [2024-07-14 09:15:37.440527] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:53.021 [2024-07-14 09:15:37.440540] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:53.021 [2024-07-14 09:15:37.440551] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid606630 for offline analysis/debug. 00:05:53.021 [2024-07-14 09:15:37.440590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.279 09:15:37 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:53.279 09:15:37 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:53.279 09:15:37 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:53.279 09:15:37 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:53.279 09:15:37 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:53.279 09:15:37 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:53.279 09:15:37 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:53.279 09:15:37 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.279 09:15:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.279 ************************************ 00:05:53.279 START TEST rpc_integrity 00:05:53.279 ************************************ 00:05:53.279 09:15:37 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:53.279 09:15:37 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:53.279 09:15:37 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:53.279 09:15:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:53.279 09:15:37 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:53.279 09:15:37 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:05:53.279 09:15:37 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:53.537 09:15:37 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:53.537 09:15:37 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:53.537 09:15:37 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:53.537 09:15:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:53.537 09:15:37 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:53.537 09:15:37 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:53.537 09:15:37 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:53.537 09:15:37 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:53.537 09:15:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:53.537 09:15:37 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:53.537 09:15:37 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:53.537 { 00:05:53.537 "name": "Malloc0", 00:05:53.537 "aliases": [ 00:05:53.537 "961947a3-d65f-458f-9b45-92dbdd19110e" 00:05:53.537 ], 00:05:53.537 "product_name": "Malloc disk", 00:05:53.537 "block_size": 512, 00:05:53.537 "num_blocks": 16384, 00:05:53.537 "uuid": "961947a3-d65f-458f-9b45-92dbdd19110e", 00:05:53.537 "assigned_rate_limits": { 00:05:53.537 "rw_ios_per_sec": 0, 00:05:53.537 "rw_mbytes_per_sec": 0, 00:05:53.537 "r_mbytes_per_sec": 0, 00:05:53.537 "w_mbytes_per_sec": 0 00:05:53.537 }, 00:05:53.537 "claimed": false, 00:05:53.537 "zoned": false, 00:05:53.537 "supported_io_types": { 00:05:53.537 "read": true, 00:05:53.537 "write": true, 00:05:53.537 "unmap": true, 00:05:53.537 "flush": true, 00:05:53.537 "reset": true, 00:05:53.537 "nvme_admin": false, 00:05:53.537 "nvme_io": false, 00:05:53.537 "nvme_io_md": false, 00:05:53.537 "write_zeroes": true, 00:05:53.537 "zcopy": true, 00:05:53.537 "get_zone_info": false, 00:05:53.537 "zone_management": false, 00:05:53.537 "zone_append": false, 00:05:53.537 "compare": false, 00:05:53.537 "compare_and_write": false, 00:05:53.537 "abort": true, 00:05:53.537 "seek_hole": false, 00:05:53.537 "seek_data": false, 00:05:53.537 "copy": true, 00:05:53.537 "nvme_iov_md": false 00:05:53.537 }, 00:05:53.537 "memory_domains": [ 00:05:53.537 { 00:05:53.537 "dma_device_id": "system", 00:05:53.537 "dma_device_type": 1 00:05:53.537 }, 00:05:53.537 { 00:05:53.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:53.537 "dma_device_type": 2 00:05:53.537 } 00:05:53.537 ], 00:05:53.537 "driver_specific": {} 00:05:53.537 } 00:05:53.537 ]' 00:05:53.537 09:15:37 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:53.537 09:15:37 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:53.537 09:15:37 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:53.537 09:15:37 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:53.537 09:15:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:53.537 [2024-07-14 09:15:37.831284] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:53.537 [2024-07-14 09:15:37.831330] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:53.537 [2024-07-14 09:15:37.831362] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1dbdaf0 00:05:53.537 [2024-07-14 09:15:37.831379] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:53.537 
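Each rpc_cmd line above is a thin wrapper around scripts/rpc.py, which speaks JSON-RPC 2.0 to spdk_tgt over the Unix socket waited on earlier (/var/tmp/spdk.sock by default). The create/delete calls are easiest to issue exactly as the test does, e.g. scripts/rpc.py bdev_malloc_create 8 512 and scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0. The Python sketch below is a minimal, illustrative raw client for one of the read-only calls shown (bdev_get_bdevs); it is not part of the test suite, and the socket path is the default rather than anything the test guarantees.

    #!/usr/bin/env python3
    # Minimal, illustrative JSON-RPC 2.0 client for a running spdk_tgt.
    import json
    import socket

    SOCK_PATH = "/var/tmp/spdk.sock"   # default spdk_tgt RPC listen address

    def rpc(method, params=None, req_id=1):
        req = {"jsonrpc": "2.0", "method": method, "id": req_id}
        if params is not None:
            req["params"] = params
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(SOCK_PATH)
            s.sendall(json.dumps(req).encode())
            buf = b""
            while True:
                chunk = s.recv(4096)
                if not chunk:
                    return None
                buf += chunk
                try:
                    return json.loads(buf)   # stop once a complete JSON reply arrived
                except json.JSONDecodeError:
                    continue                 # partial read, keep receiving

    # Same listing the rpc_integrity test pipes through jq above.
    print(json.dumps(rpc("bdev_get_bdevs"), indent=2))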
[2024-07-14 09:15:37.832884] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:53.537 [2024-07-14 09:15:37.832927] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:53.537 Passthru0 00:05:53.537 09:15:37 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:53.537 09:15:37 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:53.537 09:15:37 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:53.537 09:15:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:53.537 09:15:37 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:53.537 09:15:37 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:53.537 { 00:05:53.537 "name": "Malloc0", 00:05:53.537 "aliases": [ 00:05:53.537 "961947a3-d65f-458f-9b45-92dbdd19110e" 00:05:53.537 ], 00:05:53.537 "product_name": "Malloc disk", 00:05:53.537 "block_size": 512, 00:05:53.537 "num_blocks": 16384, 00:05:53.537 "uuid": "961947a3-d65f-458f-9b45-92dbdd19110e", 00:05:53.537 "assigned_rate_limits": { 00:05:53.537 "rw_ios_per_sec": 0, 00:05:53.537 "rw_mbytes_per_sec": 0, 00:05:53.537 "r_mbytes_per_sec": 0, 00:05:53.537 "w_mbytes_per_sec": 0 00:05:53.537 }, 00:05:53.538 "claimed": true, 00:05:53.538 "claim_type": "exclusive_write", 00:05:53.538 "zoned": false, 00:05:53.538 "supported_io_types": { 00:05:53.538 "read": true, 00:05:53.538 "write": true, 00:05:53.538 "unmap": true, 00:05:53.538 "flush": true, 00:05:53.538 "reset": true, 00:05:53.538 "nvme_admin": false, 00:05:53.538 "nvme_io": false, 00:05:53.538 "nvme_io_md": false, 00:05:53.538 "write_zeroes": true, 00:05:53.538 "zcopy": true, 00:05:53.538 "get_zone_info": false, 00:05:53.538 "zone_management": false, 00:05:53.538 "zone_append": false, 00:05:53.538 "compare": false, 00:05:53.538 "compare_and_write": false, 00:05:53.538 "abort": true, 00:05:53.538 "seek_hole": false, 00:05:53.538 "seek_data": false, 00:05:53.538 "copy": true, 00:05:53.538 "nvme_iov_md": false 00:05:53.538 }, 00:05:53.538 "memory_domains": [ 00:05:53.538 { 00:05:53.538 "dma_device_id": "system", 00:05:53.538 "dma_device_type": 1 00:05:53.538 }, 00:05:53.538 { 00:05:53.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:53.538 "dma_device_type": 2 00:05:53.538 } 00:05:53.538 ], 00:05:53.538 "driver_specific": {} 00:05:53.538 }, 00:05:53.538 { 00:05:53.538 "name": "Passthru0", 00:05:53.538 "aliases": [ 00:05:53.538 "5bd85be8-8eeb-5b07-9239-1775ea20d73e" 00:05:53.538 ], 00:05:53.538 "product_name": "passthru", 00:05:53.538 "block_size": 512, 00:05:53.538 "num_blocks": 16384, 00:05:53.538 "uuid": "5bd85be8-8eeb-5b07-9239-1775ea20d73e", 00:05:53.538 "assigned_rate_limits": { 00:05:53.538 "rw_ios_per_sec": 0, 00:05:53.538 "rw_mbytes_per_sec": 0, 00:05:53.538 "r_mbytes_per_sec": 0, 00:05:53.538 "w_mbytes_per_sec": 0 00:05:53.538 }, 00:05:53.538 "claimed": false, 00:05:53.538 "zoned": false, 00:05:53.538 "supported_io_types": { 00:05:53.538 "read": true, 00:05:53.538 "write": true, 00:05:53.538 "unmap": true, 00:05:53.538 "flush": true, 00:05:53.538 "reset": true, 00:05:53.538 "nvme_admin": false, 00:05:53.538 "nvme_io": false, 00:05:53.538 "nvme_io_md": false, 00:05:53.538 "write_zeroes": true, 00:05:53.538 "zcopy": true, 00:05:53.538 "get_zone_info": false, 00:05:53.538 "zone_management": false, 00:05:53.538 "zone_append": false, 00:05:53.538 "compare": false, 00:05:53.538 "compare_and_write": false, 00:05:53.538 "abort": true, 00:05:53.538 "seek_hole": false, 
00:05:53.538 "seek_data": false, 00:05:53.538 "copy": true, 00:05:53.538 "nvme_iov_md": false 00:05:53.538 }, 00:05:53.538 "memory_domains": [ 00:05:53.538 { 00:05:53.538 "dma_device_id": "system", 00:05:53.538 "dma_device_type": 1 00:05:53.538 }, 00:05:53.538 { 00:05:53.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:53.538 "dma_device_type": 2 00:05:53.538 } 00:05:53.538 ], 00:05:53.538 "driver_specific": { 00:05:53.538 "passthru": { 00:05:53.538 "name": "Passthru0", 00:05:53.538 "base_bdev_name": "Malloc0" 00:05:53.538 } 00:05:53.538 } 00:05:53.538 } 00:05:53.538 ]' 00:05:53.538 09:15:37 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:53.538 09:15:37 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:53.538 09:15:37 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:53.538 09:15:37 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:53.538 09:15:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:53.538 09:15:37 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:53.538 09:15:37 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:53.538 09:15:37 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:53.538 09:15:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:53.538 09:15:37 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:53.538 09:15:37 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:53.538 09:15:37 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:53.538 09:15:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:53.538 09:15:37 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:53.538 09:15:37 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:53.538 09:15:37 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:53.538 09:15:37 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:53.538 00:05:53.538 real 0m0.237s 00:05:53.538 user 0m0.150s 00:05:53.538 sys 0m0.025s 00:05:53.538 09:15:37 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.538 09:15:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:53.538 ************************************ 00:05:53.538 END TEST rpc_integrity 00:05:53.538 ************************************ 00:05:53.538 09:15:37 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:53.538 09:15:37 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:53.538 09:15:37 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:53.538 09:15:37 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.538 09:15:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.796 ************************************ 00:05:53.796 START TEST rpc_plugins 00:05:53.796 ************************************ 00:05:53.796 09:15:38 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:53.796 09:15:38 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:53.796 09:15:38 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:53.796 09:15:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:53.796 09:15:38 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:53.796 09:15:38 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:53.796 09:15:38 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:05:53.796 09:15:38 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:53.796 09:15:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:53.796 09:15:38 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:53.796 09:15:38 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:53.796 { 00:05:53.796 "name": "Malloc1", 00:05:53.796 "aliases": [ 00:05:53.796 "0b08a101-1c93-49c5-ab74-787aa7d826a8" 00:05:53.796 ], 00:05:53.796 "product_name": "Malloc disk", 00:05:53.796 "block_size": 4096, 00:05:53.796 "num_blocks": 256, 00:05:53.796 "uuid": "0b08a101-1c93-49c5-ab74-787aa7d826a8", 00:05:53.796 "assigned_rate_limits": { 00:05:53.796 "rw_ios_per_sec": 0, 00:05:53.796 "rw_mbytes_per_sec": 0, 00:05:53.796 "r_mbytes_per_sec": 0, 00:05:53.796 "w_mbytes_per_sec": 0 00:05:53.796 }, 00:05:53.796 "claimed": false, 00:05:53.796 "zoned": false, 00:05:53.796 "supported_io_types": { 00:05:53.796 "read": true, 00:05:53.796 "write": true, 00:05:53.796 "unmap": true, 00:05:53.796 "flush": true, 00:05:53.796 "reset": true, 00:05:53.796 "nvme_admin": false, 00:05:53.796 "nvme_io": false, 00:05:53.796 "nvme_io_md": false, 00:05:53.796 "write_zeroes": true, 00:05:53.796 "zcopy": true, 00:05:53.796 "get_zone_info": false, 00:05:53.796 "zone_management": false, 00:05:53.796 "zone_append": false, 00:05:53.796 "compare": false, 00:05:53.796 "compare_and_write": false, 00:05:53.796 "abort": true, 00:05:53.796 "seek_hole": false, 00:05:53.796 "seek_data": false, 00:05:53.796 "copy": true, 00:05:53.796 "nvme_iov_md": false 00:05:53.796 }, 00:05:53.796 "memory_domains": [ 00:05:53.796 { 00:05:53.796 "dma_device_id": "system", 00:05:53.796 "dma_device_type": 1 00:05:53.796 }, 00:05:53.796 { 00:05:53.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:53.796 "dma_device_type": 2 00:05:53.796 } 00:05:53.796 ], 00:05:53.796 "driver_specific": {} 00:05:53.796 } 00:05:53.796 ]' 00:05:53.796 09:15:38 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:53.796 09:15:38 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:53.796 09:15:38 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:53.796 09:15:38 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:53.796 09:15:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:53.796 09:15:38 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:53.796 09:15:38 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:53.796 09:15:38 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:53.796 09:15:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:53.796 09:15:38 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:53.796 09:15:38 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:53.796 09:15:38 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:53.796 09:15:38 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:53.796 00:05:53.796 real 0m0.112s 00:05:53.796 user 0m0.073s 00:05:53.796 sys 0m0.011s 00:05:53.796 09:15:38 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.796 09:15:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:53.796 ************************************ 00:05:53.796 END TEST rpc_plugins 00:05:53.796 ************************************ 00:05:53.796 09:15:38 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:53.796 09:15:38 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:53.796 09:15:38 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:53.796 09:15:38 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.796 09:15:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.796 ************************************ 00:05:53.796 START TEST rpc_trace_cmd_test 00:05:53.796 ************************************ 00:05:53.796 09:15:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:53.796 09:15:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:53.796 09:15:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:53.796 09:15:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:53.796 09:15:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:53.796 09:15:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:53.796 09:15:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:53.796 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid606630", 00:05:53.796 "tpoint_group_mask": "0x8", 00:05:53.796 "iscsi_conn": { 00:05:53.796 "mask": "0x2", 00:05:53.796 "tpoint_mask": "0x0" 00:05:53.796 }, 00:05:53.796 "scsi": { 00:05:53.796 "mask": "0x4", 00:05:53.796 "tpoint_mask": "0x0" 00:05:53.796 }, 00:05:53.796 "bdev": { 00:05:53.796 "mask": "0x8", 00:05:53.796 "tpoint_mask": "0xffffffffffffffff" 00:05:53.796 }, 00:05:53.796 "nvmf_rdma": { 00:05:53.796 "mask": "0x10", 00:05:53.796 "tpoint_mask": "0x0" 00:05:53.796 }, 00:05:53.796 "nvmf_tcp": { 00:05:53.796 "mask": "0x20", 00:05:53.796 "tpoint_mask": "0x0" 00:05:53.796 }, 00:05:53.796 "ftl": { 00:05:53.796 "mask": "0x40", 00:05:53.796 "tpoint_mask": "0x0" 00:05:53.796 }, 00:05:53.796 "blobfs": { 00:05:53.796 "mask": "0x80", 00:05:53.796 "tpoint_mask": "0x0" 00:05:53.796 }, 00:05:53.796 "dsa": { 00:05:53.796 "mask": "0x200", 00:05:53.796 "tpoint_mask": "0x0" 00:05:53.796 }, 00:05:53.796 "thread": { 00:05:53.797 "mask": "0x400", 00:05:53.797 "tpoint_mask": "0x0" 00:05:53.797 }, 00:05:53.797 "nvme_pcie": { 00:05:53.797 "mask": "0x800", 00:05:53.797 "tpoint_mask": "0x0" 00:05:53.797 }, 00:05:53.797 "iaa": { 00:05:53.797 "mask": "0x1000", 00:05:53.797 "tpoint_mask": "0x0" 00:05:53.797 }, 00:05:53.797 "nvme_tcp": { 00:05:53.797 "mask": "0x2000", 00:05:53.797 "tpoint_mask": "0x0" 00:05:53.797 }, 00:05:53.797 "bdev_nvme": { 00:05:53.797 "mask": "0x4000", 00:05:53.797 "tpoint_mask": "0x0" 00:05:53.797 }, 00:05:53.797 "sock": { 00:05:53.797 "mask": "0x8000", 00:05:53.797 "tpoint_mask": "0x0" 00:05:53.797 } 00:05:53.797 }' 00:05:53.797 09:15:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:53.797 09:15:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:53.797 09:15:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:53.797 09:15:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:53.797 09:15:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:54.055 09:15:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:54.055 09:15:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:54.055 09:15:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:54.055 09:15:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:54.055 09:15:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
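The trace assertions above only read tracepoint state back over RPC. A minimal standalone sketch of the same checks, assuming a running spdk_tgt started with bdev tracepoints enabled (e.g. -e bdev) and an SPDK checkout as the working directory:
  info=$(scripts/rpc.py trace_get_info)
  echo "$info" | jq 'has("tpoint_group_mask")'   # expect true
  echo "$info" | jq 'has("tpoint_shm_path")'     # expect true
  echo "$info" | jq -r .bdev.tpoint_mask         # non-zero while bdev tracepoints are enabled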
00:05:54.055 00:05:54.055 real 0m0.197s 00:05:54.055 user 0m0.173s 00:05:54.055 sys 0m0.016s 00:05:54.055 09:15:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.055 09:15:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:54.055 ************************************ 00:05:54.055 END TEST rpc_trace_cmd_test 00:05:54.055 ************************************ 00:05:54.055 09:15:38 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:54.055 09:15:38 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:54.055 09:15:38 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:54.055 09:15:38 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:54.055 09:15:38 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:54.055 09:15:38 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.055 09:15:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.055 ************************************ 00:05:54.055 START TEST rpc_daemon_integrity 00:05:54.055 ************************************ 00:05:54.055 09:15:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:54.055 09:15:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:54.055 09:15:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:54.055 09:15:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.055 09:15:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:54.055 09:15:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:54.055 09:15:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:54.055 09:15:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:54.055 09:15:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:54.055 09:15:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:54.055 09:15:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.055 09:15:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:54.055 09:15:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:54.055 09:15:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:54.055 09:15:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:54.055 09:15:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.055 09:15:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:54.055 09:15:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:54.055 { 00:05:54.055 "name": "Malloc2", 00:05:54.055 "aliases": [ 00:05:54.055 "ef512cbd-11d8-44be-a1bf-16803480eda1" 00:05:54.055 ], 00:05:54.055 "product_name": "Malloc disk", 00:05:54.055 "block_size": 512, 00:05:54.055 "num_blocks": 16384, 00:05:54.055 "uuid": "ef512cbd-11d8-44be-a1bf-16803480eda1", 00:05:54.056 "assigned_rate_limits": { 00:05:54.056 "rw_ios_per_sec": 0, 00:05:54.056 "rw_mbytes_per_sec": 0, 00:05:54.056 "r_mbytes_per_sec": 0, 00:05:54.056 "w_mbytes_per_sec": 0 00:05:54.056 }, 00:05:54.056 "claimed": false, 00:05:54.056 "zoned": false, 00:05:54.056 "supported_io_types": { 00:05:54.056 "read": true, 00:05:54.056 "write": true, 00:05:54.056 "unmap": true, 00:05:54.056 "flush": true, 00:05:54.056 "reset": true, 00:05:54.056 "nvme_admin": false, 00:05:54.056 "nvme_io": false, 
00:05:54.056 "nvme_io_md": false, 00:05:54.056 "write_zeroes": true, 00:05:54.056 "zcopy": true, 00:05:54.056 "get_zone_info": false, 00:05:54.056 "zone_management": false, 00:05:54.056 "zone_append": false, 00:05:54.056 "compare": false, 00:05:54.056 "compare_and_write": false, 00:05:54.056 "abort": true, 00:05:54.056 "seek_hole": false, 00:05:54.056 "seek_data": false, 00:05:54.056 "copy": true, 00:05:54.056 "nvme_iov_md": false 00:05:54.056 }, 00:05:54.056 "memory_domains": [ 00:05:54.056 { 00:05:54.056 "dma_device_id": "system", 00:05:54.056 "dma_device_type": 1 00:05:54.056 }, 00:05:54.056 { 00:05:54.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:54.056 "dma_device_type": 2 00:05:54.056 } 00:05:54.056 ], 00:05:54.056 "driver_specific": {} 00:05:54.056 } 00:05:54.056 ]' 00:05:54.056 09:15:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:54.314 09:15:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:54.314 09:15:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:54.314 09:15:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:54.314 09:15:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.314 [2024-07-14 09:15:38.513291] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:54.314 [2024-07-14 09:15:38.513336] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:54.314 [2024-07-14 09:15:38.513361] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c0d290 00:05:54.314 [2024-07-14 09:15:38.513377] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:54.314 [2024-07-14 09:15:38.514717] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:54.314 [2024-07-14 09:15:38.514746] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:54.314 Passthru0 00:05:54.314 09:15:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:54.314 09:15:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:54.314 09:15:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:54.314 09:15:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.314 09:15:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:54.314 09:15:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:54.314 { 00:05:54.314 "name": "Malloc2", 00:05:54.314 "aliases": [ 00:05:54.314 "ef512cbd-11d8-44be-a1bf-16803480eda1" 00:05:54.314 ], 00:05:54.314 "product_name": "Malloc disk", 00:05:54.314 "block_size": 512, 00:05:54.314 "num_blocks": 16384, 00:05:54.314 "uuid": "ef512cbd-11d8-44be-a1bf-16803480eda1", 00:05:54.314 "assigned_rate_limits": { 00:05:54.314 "rw_ios_per_sec": 0, 00:05:54.314 "rw_mbytes_per_sec": 0, 00:05:54.314 "r_mbytes_per_sec": 0, 00:05:54.314 "w_mbytes_per_sec": 0 00:05:54.314 }, 00:05:54.314 "claimed": true, 00:05:54.314 "claim_type": "exclusive_write", 00:05:54.314 "zoned": false, 00:05:54.314 "supported_io_types": { 00:05:54.314 "read": true, 00:05:54.314 "write": true, 00:05:54.314 "unmap": true, 00:05:54.314 "flush": true, 00:05:54.314 "reset": true, 00:05:54.314 "nvme_admin": false, 00:05:54.314 "nvme_io": false, 00:05:54.314 "nvme_io_md": false, 00:05:54.314 "write_zeroes": true, 00:05:54.314 "zcopy": true, 00:05:54.314 "get_zone_info": 
false, 00:05:54.314 "zone_management": false, 00:05:54.314 "zone_append": false, 00:05:54.314 "compare": false, 00:05:54.314 "compare_and_write": false, 00:05:54.314 "abort": true, 00:05:54.314 "seek_hole": false, 00:05:54.314 "seek_data": false, 00:05:54.314 "copy": true, 00:05:54.314 "nvme_iov_md": false 00:05:54.314 }, 00:05:54.314 "memory_domains": [ 00:05:54.314 { 00:05:54.314 "dma_device_id": "system", 00:05:54.314 "dma_device_type": 1 00:05:54.314 }, 00:05:54.314 { 00:05:54.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:54.314 "dma_device_type": 2 00:05:54.314 } 00:05:54.314 ], 00:05:54.314 "driver_specific": {} 00:05:54.314 }, 00:05:54.314 { 00:05:54.314 "name": "Passthru0", 00:05:54.314 "aliases": [ 00:05:54.314 "be800179-dfc0-5b07-8f45-58da458892d1" 00:05:54.314 ], 00:05:54.314 "product_name": "passthru", 00:05:54.314 "block_size": 512, 00:05:54.314 "num_blocks": 16384, 00:05:54.314 "uuid": "be800179-dfc0-5b07-8f45-58da458892d1", 00:05:54.314 "assigned_rate_limits": { 00:05:54.314 "rw_ios_per_sec": 0, 00:05:54.314 "rw_mbytes_per_sec": 0, 00:05:54.314 "r_mbytes_per_sec": 0, 00:05:54.314 "w_mbytes_per_sec": 0 00:05:54.314 }, 00:05:54.314 "claimed": false, 00:05:54.314 "zoned": false, 00:05:54.314 "supported_io_types": { 00:05:54.314 "read": true, 00:05:54.314 "write": true, 00:05:54.314 "unmap": true, 00:05:54.314 "flush": true, 00:05:54.314 "reset": true, 00:05:54.314 "nvme_admin": false, 00:05:54.314 "nvme_io": false, 00:05:54.314 "nvme_io_md": false, 00:05:54.314 "write_zeroes": true, 00:05:54.314 "zcopy": true, 00:05:54.314 "get_zone_info": false, 00:05:54.315 "zone_management": false, 00:05:54.315 "zone_append": false, 00:05:54.315 "compare": false, 00:05:54.315 "compare_and_write": false, 00:05:54.315 "abort": true, 00:05:54.315 "seek_hole": false, 00:05:54.315 "seek_data": false, 00:05:54.315 "copy": true, 00:05:54.315 "nvme_iov_md": false 00:05:54.315 }, 00:05:54.315 "memory_domains": [ 00:05:54.315 { 00:05:54.315 "dma_device_id": "system", 00:05:54.315 "dma_device_type": 1 00:05:54.315 }, 00:05:54.315 { 00:05:54.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:54.315 "dma_device_type": 2 00:05:54.315 } 00:05:54.315 ], 00:05:54.315 "driver_specific": { 00:05:54.315 "passthru": { 00:05:54.315 "name": "Passthru0", 00:05:54.315 "base_bdev_name": "Malloc2" 00:05:54.315 } 00:05:54.315 } 00:05:54.315 } 00:05:54.315 ]' 00:05:54.315 09:15:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:54.315 09:15:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:54.315 09:15:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:54.315 09:15:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:54.315 09:15:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.315 09:15:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:54.315 09:15:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:54.315 09:15:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:54.315 09:15:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.315 09:15:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:54.315 09:15:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:54.315 09:15:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:54.315 09:15:38 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.315 09:15:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:54.315 09:15:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:54.315 09:15:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:54.315 09:15:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:54.315 00:05:54.315 real 0m0.228s 00:05:54.315 user 0m0.150s 00:05:54.315 sys 0m0.022s 00:05:54.315 09:15:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.315 09:15:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.315 ************************************ 00:05:54.315 END TEST rpc_daemon_integrity 00:05:54.315 ************************************ 00:05:54.315 09:15:38 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:54.315 09:15:38 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:54.315 09:15:38 rpc -- rpc/rpc.sh@84 -- # killprocess 606630 00:05:54.315 09:15:38 rpc -- common/autotest_common.sh@948 -- # '[' -z 606630 ']' 00:05:54.315 09:15:38 rpc -- common/autotest_common.sh@952 -- # kill -0 606630 00:05:54.315 09:15:38 rpc -- common/autotest_common.sh@953 -- # uname 00:05:54.315 09:15:38 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:54.315 09:15:38 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 606630 00:05:54.315 09:15:38 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:54.315 09:15:38 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:54.315 09:15:38 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 606630' 00:05:54.315 killing process with pid 606630 00:05:54.315 09:15:38 rpc -- common/autotest_common.sh@967 -- # kill 606630 00:05:54.315 09:15:38 rpc -- common/autotest_common.sh@972 -- # wait 606630 00:05:54.881 00:05:54.881 real 0m1.912s 00:05:54.881 user 0m2.444s 00:05:54.881 sys 0m0.582s 00:05:54.881 09:15:39 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.881 09:15:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.881 ************************************ 00:05:54.881 END TEST rpc 00:05:54.881 ************************************ 00:05:54.881 09:15:39 -- common/autotest_common.sh@1142 -- # return 0 00:05:54.881 09:15:39 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:54.881 09:15:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:54.881 09:15:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.881 09:15:39 -- common/autotest_common.sh@10 -- # set +x 00:05:54.881 ************************************ 00:05:54.881 START TEST skip_rpc 00:05:54.881 ************************************ 00:05:54.881 09:15:39 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:54.881 * Looking for test storage... 
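For reference, the rpc_integrity and rpc_daemon_integrity cases that finish above reduce to the same malloc/passthru round trip. A minimal sketch, assuming a running spdk_tgt on the default /var/tmp/spdk.sock and an SPDK checkout as the working directory (bdev names are examples):
  scripts/rpc.py bdev_malloc_create 8 512                      # returns a name such as Malloc2
  scripts/rpc.py bdev_passthru_create -b Malloc2 -p Passthru0
  scripts/rpc.py bdev_get_bdevs | jq length                    # expect 2
  scripts/rpc.py bdev_passthru_delete Passthru0
  scripts/rpc.py bdev_malloc_delete Malloc2
  scripts/rpc.py bdev_get_bdevs | jq length                    # expect 0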
00:05:54.881 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:54.881 09:15:39 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:54.881 09:15:39 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:54.881 09:15:39 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:54.881 09:15:39 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:54.881 09:15:39 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.881 09:15:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.881 ************************************ 00:05:54.881 START TEST skip_rpc 00:05:54.881 ************************************ 00:05:54.881 09:15:39 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:54.881 09:15:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=607067 00:05:54.881 09:15:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:54.881 09:15:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:54.882 09:15:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:54.882 [2024-07-14 09:15:39.275432] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:05:54.882 [2024-07-14 09:15:39.275516] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid607067 ] 00:05:54.882 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.139 [2024-07-14 09:15:39.341768] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.139 [2024-07-14 09:15:39.432341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.463 09:15:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:00.463 09:15:44 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:00.463 09:15:44 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:00.463 09:15:44 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:00.463 09:15:44 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:00.463 09:15:44 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:00.463 09:15:44 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:00.463 09:15:44 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:06:00.463 09:15:44 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.463 09:15:44 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.463 09:15:44 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:00.463 09:15:44 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:00.463 09:15:44 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:00.463 09:15:44 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:00.463 09:15:44 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:00.463 09:15:44 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:00.463 09:15:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 607067 00:06:00.463 09:15:44 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 607067 ']' 00:06:00.463 09:15:44 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 607067 00:06:00.463 09:15:44 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:06:00.463 09:15:44 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:00.463 09:15:44 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 607067 00:06:00.463 09:15:44 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:00.463 09:15:44 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:00.463 09:15:44 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 607067' 00:06:00.463 killing process with pid 607067 00:06:00.463 09:15:44 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 607067 00:06:00.463 09:15:44 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 607067 00:06:00.463 00:06:00.463 real 0m5.429s 00:06:00.463 user 0m5.103s 00:06:00.463 sys 0m0.325s 00:06:00.463 09:15:44 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.463 09:15:44 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.463 ************************************ 00:06:00.463 END TEST skip_rpc 00:06:00.463 ************************************ 00:06:00.463 09:15:44 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:00.463 09:15:44 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:00.463 09:15:44 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:00.463 09:15:44 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.463 09:15:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.463 ************************************ 00:06:00.463 START TEST skip_rpc_with_json 00:06:00.463 ************************************ 00:06:00.463 09:15:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:06:00.463 09:15:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:00.463 09:15:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=607762 00:06:00.463 09:15:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:00.463 09:15:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:00.463 09:15:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 607762 00:06:00.463 09:15:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 607762 ']' 00:06:00.463 09:15:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.463 09:15:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:00.463 09:15:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
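test_skip_rpc, completed above, only verifies that a target started with --no-rpc-server refuses RPC. A minimal sketch of the same check, assuming an SPDK build tree:
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  tgt_pid=$!
  sleep 5
  scripts/rpc.py spdk_get_version && echo "unexpected: RPC answered" || echo "expected: no RPC server"
  kill $tgt_pid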
00:06:00.463 09:15:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:00.463 09:15:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:00.463 [2024-07-14 09:15:44.746093] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:00.463 [2024-07-14 09:15:44.746191] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid607762 ] 00:06:00.463 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.463 [2024-07-14 09:15:44.803193] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.463 [2024-07-14 09:15:44.891547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.722 09:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.722 09:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:06:00.722 09:15:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:00.722 09:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.722 09:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:00.722 [2024-07-14 09:15:45.154277] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:00.722 request: 00:06:00.722 { 00:06:00.722 "trtype": "tcp", 00:06:00.722 "method": "nvmf_get_transports", 00:06:00.722 "req_id": 1 00:06:00.722 } 00:06:00.722 Got JSON-RPC error response 00:06:00.722 response: 00:06:00.722 { 00:06:00.722 "code": -19, 00:06:00.722 "message": "No such device" 00:06:00.722 } 00:06:00.722 09:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:00.722 09:15:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:00.722 09:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.722 09:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:00.722 [2024-07-14 09:15:45.162407] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:00.722 09:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:00.722 09:15:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:00.722 09:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.722 09:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:00.980 09:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:00.980 09:15:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:00.980 { 00:06:00.980 "subsystems": [ 00:06:00.980 { 00:06:00.980 "subsystem": "vfio_user_target", 00:06:00.980 "config": null 00:06:00.980 }, 00:06:00.980 { 00:06:00.980 "subsystem": "keyring", 00:06:00.980 "config": [] 00:06:00.980 }, 00:06:00.980 { 00:06:00.980 "subsystem": "iobuf", 00:06:00.980 "config": [ 00:06:00.980 { 00:06:00.980 "method": "iobuf_set_options", 00:06:00.980 "params": { 00:06:00.980 "small_pool_count": 8192, 00:06:00.980 "large_pool_count": 1024, 00:06:00.980 "small_bufsize": 8192, 00:06:00.980 "large_bufsize": 
135168 00:06:00.980 } 00:06:00.980 } 00:06:00.980 ] 00:06:00.980 }, 00:06:00.980 { 00:06:00.980 "subsystem": "sock", 00:06:00.980 "config": [ 00:06:00.980 { 00:06:00.980 "method": "sock_set_default_impl", 00:06:00.980 "params": { 00:06:00.980 "impl_name": "posix" 00:06:00.980 } 00:06:00.980 }, 00:06:00.980 { 00:06:00.980 "method": "sock_impl_set_options", 00:06:00.980 "params": { 00:06:00.980 "impl_name": "ssl", 00:06:00.980 "recv_buf_size": 4096, 00:06:00.980 "send_buf_size": 4096, 00:06:00.980 "enable_recv_pipe": true, 00:06:00.980 "enable_quickack": false, 00:06:00.980 "enable_placement_id": 0, 00:06:00.980 "enable_zerocopy_send_server": true, 00:06:00.980 "enable_zerocopy_send_client": false, 00:06:00.980 "zerocopy_threshold": 0, 00:06:00.980 "tls_version": 0, 00:06:00.980 "enable_ktls": false 00:06:00.980 } 00:06:00.980 }, 00:06:00.980 { 00:06:00.980 "method": "sock_impl_set_options", 00:06:00.980 "params": { 00:06:00.980 "impl_name": "posix", 00:06:00.980 "recv_buf_size": 2097152, 00:06:00.980 "send_buf_size": 2097152, 00:06:00.980 "enable_recv_pipe": true, 00:06:00.980 "enable_quickack": false, 00:06:00.980 "enable_placement_id": 0, 00:06:00.980 "enable_zerocopy_send_server": true, 00:06:00.980 "enable_zerocopy_send_client": false, 00:06:00.980 "zerocopy_threshold": 0, 00:06:00.980 "tls_version": 0, 00:06:00.980 "enable_ktls": false 00:06:00.980 } 00:06:00.980 } 00:06:00.980 ] 00:06:00.980 }, 00:06:00.980 { 00:06:00.980 "subsystem": "vmd", 00:06:00.980 "config": [] 00:06:00.980 }, 00:06:00.980 { 00:06:00.980 "subsystem": "accel", 00:06:00.980 "config": [ 00:06:00.980 { 00:06:00.980 "method": "accel_set_options", 00:06:00.980 "params": { 00:06:00.980 "small_cache_size": 128, 00:06:00.980 "large_cache_size": 16, 00:06:00.980 "task_count": 2048, 00:06:00.980 "sequence_count": 2048, 00:06:00.981 "buf_count": 2048 00:06:00.981 } 00:06:00.981 } 00:06:00.981 ] 00:06:00.981 }, 00:06:00.981 { 00:06:00.981 "subsystem": "bdev", 00:06:00.981 "config": [ 00:06:00.981 { 00:06:00.981 "method": "bdev_set_options", 00:06:00.981 "params": { 00:06:00.981 "bdev_io_pool_size": 65535, 00:06:00.981 "bdev_io_cache_size": 256, 00:06:00.981 "bdev_auto_examine": true, 00:06:00.981 "iobuf_small_cache_size": 128, 00:06:00.981 "iobuf_large_cache_size": 16 00:06:00.981 } 00:06:00.981 }, 00:06:00.981 { 00:06:00.981 "method": "bdev_raid_set_options", 00:06:00.981 "params": { 00:06:00.981 "process_window_size_kb": 1024 00:06:00.981 } 00:06:00.981 }, 00:06:00.981 { 00:06:00.981 "method": "bdev_iscsi_set_options", 00:06:00.981 "params": { 00:06:00.981 "timeout_sec": 30 00:06:00.981 } 00:06:00.981 }, 00:06:00.981 { 00:06:00.981 "method": "bdev_nvme_set_options", 00:06:00.981 "params": { 00:06:00.981 "action_on_timeout": "none", 00:06:00.981 "timeout_us": 0, 00:06:00.981 "timeout_admin_us": 0, 00:06:00.981 "keep_alive_timeout_ms": 10000, 00:06:00.981 "arbitration_burst": 0, 00:06:00.981 "low_priority_weight": 0, 00:06:00.981 "medium_priority_weight": 0, 00:06:00.981 "high_priority_weight": 0, 00:06:00.981 "nvme_adminq_poll_period_us": 10000, 00:06:00.981 "nvme_ioq_poll_period_us": 0, 00:06:00.981 "io_queue_requests": 0, 00:06:00.981 "delay_cmd_submit": true, 00:06:00.981 "transport_retry_count": 4, 00:06:00.981 "bdev_retry_count": 3, 00:06:00.981 "transport_ack_timeout": 0, 00:06:00.981 "ctrlr_loss_timeout_sec": 0, 00:06:00.981 "reconnect_delay_sec": 0, 00:06:00.981 "fast_io_fail_timeout_sec": 0, 00:06:00.981 "disable_auto_failback": false, 00:06:00.981 "generate_uuids": false, 00:06:00.981 "transport_tos": 0, 
00:06:00.981 "nvme_error_stat": false, 00:06:00.981 "rdma_srq_size": 0, 00:06:00.981 "io_path_stat": false, 00:06:00.981 "allow_accel_sequence": false, 00:06:00.981 "rdma_max_cq_size": 0, 00:06:00.981 "rdma_cm_event_timeout_ms": 0, 00:06:00.981 "dhchap_digests": [ 00:06:00.981 "sha256", 00:06:00.981 "sha384", 00:06:00.981 "sha512" 00:06:00.981 ], 00:06:00.981 "dhchap_dhgroups": [ 00:06:00.981 "null", 00:06:00.981 "ffdhe2048", 00:06:00.981 "ffdhe3072", 00:06:00.981 "ffdhe4096", 00:06:00.981 "ffdhe6144", 00:06:00.981 "ffdhe8192" 00:06:00.981 ] 00:06:00.981 } 00:06:00.981 }, 00:06:00.981 { 00:06:00.981 "method": "bdev_nvme_set_hotplug", 00:06:00.981 "params": { 00:06:00.981 "period_us": 100000, 00:06:00.981 "enable": false 00:06:00.981 } 00:06:00.981 }, 00:06:00.981 { 00:06:00.981 "method": "bdev_wait_for_examine" 00:06:00.981 } 00:06:00.981 ] 00:06:00.981 }, 00:06:00.981 { 00:06:00.981 "subsystem": "scsi", 00:06:00.981 "config": null 00:06:00.981 }, 00:06:00.981 { 00:06:00.981 "subsystem": "scheduler", 00:06:00.981 "config": [ 00:06:00.981 { 00:06:00.981 "method": "framework_set_scheduler", 00:06:00.981 "params": { 00:06:00.981 "name": "static" 00:06:00.981 } 00:06:00.981 } 00:06:00.981 ] 00:06:00.981 }, 00:06:00.981 { 00:06:00.981 "subsystem": "vhost_scsi", 00:06:00.981 "config": [] 00:06:00.981 }, 00:06:00.981 { 00:06:00.981 "subsystem": "vhost_blk", 00:06:00.981 "config": [] 00:06:00.981 }, 00:06:00.981 { 00:06:00.981 "subsystem": "ublk", 00:06:00.981 "config": [] 00:06:00.981 }, 00:06:00.981 { 00:06:00.981 "subsystem": "nbd", 00:06:00.981 "config": [] 00:06:00.981 }, 00:06:00.981 { 00:06:00.981 "subsystem": "nvmf", 00:06:00.981 "config": [ 00:06:00.981 { 00:06:00.981 "method": "nvmf_set_config", 00:06:00.981 "params": { 00:06:00.981 "discovery_filter": "match_any", 00:06:00.981 "admin_cmd_passthru": { 00:06:00.981 "identify_ctrlr": false 00:06:00.981 } 00:06:00.981 } 00:06:00.981 }, 00:06:00.981 { 00:06:00.981 "method": "nvmf_set_max_subsystems", 00:06:00.981 "params": { 00:06:00.981 "max_subsystems": 1024 00:06:00.981 } 00:06:00.981 }, 00:06:00.981 { 00:06:00.981 "method": "nvmf_set_crdt", 00:06:00.981 "params": { 00:06:00.981 "crdt1": 0, 00:06:00.981 "crdt2": 0, 00:06:00.981 "crdt3": 0 00:06:00.981 } 00:06:00.981 }, 00:06:00.981 { 00:06:00.981 "method": "nvmf_create_transport", 00:06:00.981 "params": { 00:06:00.981 "trtype": "TCP", 00:06:00.981 "max_queue_depth": 128, 00:06:00.981 "max_io_qpairs_per_ctrlr": 127, 00:06:00.981 "in_capsule_data_size": 4096, 00:06:00.981 "max_io_size": 131072, 00:06:00.981 "io_unit_size": 131072, 00:06:00.981 "max_aq_depth": 128, 00:06:00.981 "num_shared_buffers": 511, 00:06:00.981 "buf_cache_size": 4294967295, 00:06:00.981 "dif_insert_or_strip": false, 00:06:00.981 "zcopy": false, 00:06:00.981 "c2h_success": true, 00:06:00.981 "sock_priority": 0, 00:06:00.981 "abort_timeout_sec": 1, 00:06:00.981 "ack_timeout": 0, 00:06:00.981 "data_wr_pool_size": 0 00:06:00.981 } 00:06:00.981 } 00:06:00.981 ] 00:06:00.981 }, 00:06:00.981 { 00:06:00.981 "subsystem": "iscsi", 00:06:00.981 "config": [ 00:06:00.981 { 00:06:00.981 "method": "iscsi_set_options", 00:06:00.981 "params": { 00:06:00.981 "node_base": "iqn.2016-06.io.spdk", 00:06:00.981 "max_sessions": 128, 00:06:00.981 "max_connections_per_session": 2, 00:06:00.981 "max_queue_depth": 64, 00:06:00.981 "default_time2wait": 2, 00:06:00.981 "default_time2retain": 20, 00:06:00.981 "first_burst_length": 8192, 00:06:00.981 "immediate_data": true, 00:06:00.981 "allow_duplicated_isid": false, 00:06:00.981 
"error_recovery_level": 0, 00:06:00.981 "nop_timeout": 60, 00:06:00.981 "nop_in_interval": 30, 00:06:00.981 "disable_chap": false, 00:06:00.981 "require_chap": false, 00:06:00.981 "mutual_chap": false, 00:06:00.981 "chap_group": 0, 00:06:00.981 "max_large_datain_per_connection": 64, 00:06:00.981 "max_r2t_per_connection": 4, 00:06:00.981 "pdu_pool_size": 36864, 00:06:00.981 "immediate_data_pool_size": 16384, 00:06:00.981 "data_out_pool_size": 2048 00:06:00.981 } 00:06:00.981 } 00:06:00.981 ] 00:06:00.981 } 00:06:00.981 ] 00:06:00.981 } 00:06:00.981 09:15:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:00.981 09:15:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 607762 00:06:00.981 09:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 607762 ']' 00:06:00.981 09:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 607762 00:06:00.981 09:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:06:00.981 09:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:00.981 09:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 607762 00:06:00.981 09:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:00.981 09:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:00.981 09:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 607762' 00:06:00.981 killing process with pid 607762 00:06:00.981 09:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 607762 00:06:00.981 09:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 607762 00:06:01.547 09:15:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=607904 00:06:01.547 09:15:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:01.547 09:15:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:06.810 09:15:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 607904 00:06:06.810 09:15:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 607904 ']' 00:06:06.810 09:15:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 607904 00:06:06.810 09:15:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:06:06.810 09:15:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:06.810 09:15:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 607904 00:06:06.810 09:15:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:06.810 09:15:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:06.810 09:15:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 607904' 00:06:06.810 killing process with pid 607904 00:06:06.810 09:15:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 607904 00:06:06.810 09:15:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 607904 00:06:06.810 09:15:51 
skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:06.810 09:15:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:06.810 00:06:06.810 real 0m6.492s 00:06:06.810 user 0m6.080s 00:06:06.810 sys 0m0.714s 00:06:06.810 09:15:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.810 09:15:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:06.810 ************************************ 00:06:06.810 END TEST skip_rpc_with_json 00:06:06.810 ************************************ 00:06:06.810 09:15:51 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:06.810 09:15:51 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:06.810 09:15:51 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:06.810 09:15:51 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.810 09:15:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.810 ************************************ 00:06:06.810 START TEST skip_rpc_with_delay 00:06:06.810 ************************************ 00:06:06.810 09:15:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:06:06.810 09:15:51 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:06.810 09:15:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:06:06.810 09:15:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:06.810 09:15:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:06.810 09:15:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:06.810 09:15:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:06.810 09:15:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:06.810 09:15:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:06.810 09:15:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:06.810 09:15:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:06.810 09:15:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:06.810 09:15:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:07.068 [2024-07-14 09:15:51.287250] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
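The with-json case that ends above saves the live configuration and replays it at startup. A minimal sketch, assuming a running target for the first two commands, which is then stopped before the replay (file names are examples):
  scripts/rpc.py nvmf_create_transport -t tcp
  scripts/rpc.py save_config > config.json
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json > log.txt 2>&1 &
  tgt_pid=$!
  sleep 5
  grep -q 'TCP Transport Init' log.txt && echo "saved config replayed"
  kill $tgt_pid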
00:06:07.068 [2024-07-14 09:15:51.287371] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:07.068 09:15:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:06:07.068 09:15:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:07.068 09:15:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:07.068 09:15:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:07.068 00:06:07.068 real 0m0.068s 00:06:07.068 user 0m0.046s 00:06:07.068 sys 0m0.021s 00:06:07.069 09:15:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.069 09:15:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:07.069 ************************************ 00:06:07.069 END TEST skip_rpc_with_delay 00:06:07.069 ************************************ 00:06:07.069 09:15:51 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:07.069 09:15:51 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:07.069 09:15:51 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:07.069 09:15:51 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:07.069 09:15:51 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:07.069 09:15:51 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.069 09:15:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.069 ************************************ 00:06:07.069 START TEST exit_on_failed_rpc_init 00:06:07.069 ************************************ 00:06:07.069 09:15:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:06:07.069 09:15:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=608614 00:06:07.069 09:15:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:07.069 09:15:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 608614 00:06:07.069 09:15:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 608614 ']' 00:06:07.069 09:15:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.069 09:15:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:07.069 09:15:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.069 09:15:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:07.069 09:15:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:07.069 [2024-07-14 09:15:51.390786] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
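The with-delay case above exercises argument validation rather than RPC: --wait-for-rpc makes no sense without an RPC server, so the target must refuse to start. A minimal sketch:
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
  echo "exit status: $?"    # non-zero, with the 'Cannot use --wait-for-rpc' error shown above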
00:06:07.069 [2024-07-14 09:15:51.390889] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid608614 ] 00:06:07.069 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.069 [2024-07-14 09:15:51.447595] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.327 [2024-07-14 09:15:51.537840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.585 09:15:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:07.585 09:15:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:06:07.585 09:15:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:07.585 09:15:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:07.585 09:15:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:06:07.585 09:15:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:07.585 09:15:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:07.585 09:15:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:07.585 09:15:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:07.585 09:15:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:07.585 09:15:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:07.585 09:15:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:07.585 09:15:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:07.585 09:15:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:07.585 09:15:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:07.585 [2024-07-14 09:15:51.850997] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
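exit_on_failed_rpc_init, whose startup is logged above, launches a second target against the same default RPC socket and expects it to exit on the socket-in-use error that follows. A minimal sketch:
  ./build/bin/spdk_tgt -m 0x1 &            # first instance owns /var/tmp/spdk.sock
  tgt_pid=$!
  sleep 5
  ./build/bin/spdk_tgt -m 0x2              # expected to fail: RPC socket already in use
  echo "second instance exit status: $?"   # non-zero
  kill $tgt_pid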
00:06:07.585 [2024-07-14 09:15:51.851092] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid608627 ] 00:06:07.585 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.585 [2024-07-14 09:15:51.911942] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.585 [2024-07-14 09:15:52.006501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.585 [2024-07-14 09:15:52.006600] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:06:07.585 [2024-07-14 09:15:52.006621] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:07.585 [2024-07-14 09:15:52.006635] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:07.843 09:15:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:06:07.843 09:15:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:07.843 09:15:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:06:07.843 09:15:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:06:07.843 09:15:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:06:07.843 09:15:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:07.843 09:15:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:07.843 09:15:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 608614 00:06:07.843 09:15:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 608614 ']' 00:06:07.843 09:15:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 608614 00:06:07.843 09:15:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:06:07.843 09:15:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:07.843 09:15:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 608614 00:06:07.843 09:15:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:07.843 09:15:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:07.843 09:15:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 608614' 00:06:07.843 killing process with pid 608614 00:06:07.843 09:15:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 608614 00:06:07.843 09:15:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 608614 00:06:08.101 00:06:08.101 real 0m1.178s 00:06:08.101 user 0m1.259s 00:06:08.101 sys 0m0.460s 00:06:08.101 09:15:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.101 09:15:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:08.101 ************************************ 00:06:08.101 END TEST exit_on_failed_rpc_init 00:06:08.101 ************************************ 00:06:08.101 09:15:52 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:08.101 09:15:52 skip_rpc -- rpc/skip_rpc.sh@81 -- 
# rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:08.101 00:06:08.101 real 0m13.405s 00:06:08.101 user 0m12.590s 00:06:08.101 sys 0m1.675s 00:06:08.101 09:15:52 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.101 09:15:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.101 ************************************ 00:06:08.101 END TEST skip_rpc 00:06:08.101 ************************************ 00:06:08.360 09:15:52 -- common/autotest_common.sh@1142 -- # return 0 00:06:08.360 09:15:52 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:08.360 09:15:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:08.360 09:15:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.360 09:15:52 -- common/autotest_common.sh@10 -- # set +x 00:06:08.360 ************************************ 00:06:08.360 START TEST rpc_client 00:06:08.360 ************************************ 00:06:08.360 09:15:52 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:08.360 * Looking for test storage... 00:06:08.360 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:08.360 09:15:52 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:08.360 OK 00:06:08.360 09:15:52 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:08.360 00:06:08.360 real 0m0.067s 00:06:08.360 user 0m0.025s 00:06:08.360 sys 0m0.047s 00:06:08.360 09:15:52 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.360 09:15:52 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:08.360 ************************************ 00:06:08.360 END TEST rpc_client 00:06:08.360 ************************************ 00:06:08.360 09:15:52 -- common/autotest_common.sh@1142 -- # return 0 00:06:08.360 09:15:52 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:08.360 09:15:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:08.360 09:15:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.360 09:15:52 -- common/autotest_common.sh@10 -- # set +x 00:06:08.360 ************************************ 00:06:08.360 START TEST json_config 00:06:08.360 ************************************ 00:06:08.360 09:15:52 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:08.360 09:15:52 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:08.360 09:15:52 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:08.360 09:15:52 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:08.360 09:15:52 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:08.360 09:15:52 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:08.360 09:15:52 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:08.360 09:15:52 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:08.360 09:15:52 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:08.360 09:15:52 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:08.360 09:15:52 json_config -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:08.360 09:15:52 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:08.360 09:15:52 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:08.360 09:15:52 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:08.360 09:15:52 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:08.360 09:15:52 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:08.360 09:15:52 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:08.360 09:15:52 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:08.360 09:15:52 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:08.360 09:15:52 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:08.360 09:15:52 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:08.360 09:15:52 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:08.360 09:15:52 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:08.360 09:15:52 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.360 09:15:52 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.360 09:15:52 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.360 09:15:52 json_config -- paths/export.sh@5 -- # export PATH 00:06:08.360 09:15:52 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.360 09:15:52 json_config -- nvmf/common.sh@47 -- # : 0 00:06:08.360 09:15:52 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:08.360 09:15:52 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:08.360 09:15:52 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:08.360 09:15:52 json_config -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:08.360 09:15:52 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:08.360 09:15:52 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:08.360 09:15:52 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:08.360 09:15:52 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:08.360 09:15:52 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:08.360 09:15:52 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:08.360 09:15:52 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:08.360 09:15:52 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:08.361 09:15:52 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:08.361 09:15:52 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:08.361 09:15:52 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:08.361 09:15:52 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:08.361 09:15:52 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:08.361 09:15:52 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:08.361 09:15:52 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:08.361 09:15:52 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:08.361 09:15:52 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:08.361 09:15:52 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:08.361 09:15:52 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:08.361 09:15:52 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:06:08.361 INFO: JSON configuration test init 00:06:08.361 09:15:52 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:06:08.361 09:15:52 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:06:08.361 09:15:52 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:08.361 09:15:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.361 09:15:52 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:06:08.361 09:15:52 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:08.361 09:15:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.361 09:15:52 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:06:08.361 09:15:52 json_config -- json_config/common.sh@9 -- # local app=target 00:06:08.361 09:15:52 json_config -- json_config/common.sh@10 -- # shift 00:06:08.361 09:15:52 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:08.361 09:15:52 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:08.361 09:15:52 json_config -- 
json_config/common.sh@15 -- # local app_extra_params= 00:06:08.361 09:15:52 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:08.361 09:15:52 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:08.361 09:15:52 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=608869 00:06:08.361 09:15:52 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:08.361 09:15:52 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:08.361 Waiting for target to run... 00:06:08.361 09:15:52 json_config -- json_config/common.sh@25 -- # waitforlisten 608869 /var/tmp/spdk_tgt.sock 00:06:08.361 09:15:52 json_config -- common/autotest_common.sh@829 -- # '[' -z 608869 ']' 00:06:08.361 09:15:52 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:08.361 09:15:52 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:08.361 09:15:52 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:08.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:08.361 09:15:52 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:08.361 09:15:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.361 [2024-07-14 09:15:52.810287] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:08.361 [2024-07-14 09:15:52.810388] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid608869 ] 00:06:08.619 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.878 [2024-07-14 09:15:53.157547] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.878 [2024-07-14 09:15:53.224345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.443 09:15:53 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.443 09:15:53 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:09.443 09:15:53 json_config -- json_config/common.sh@26 -- # echo '' 00:06:09.443 00:06:09.443 09:15:53 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:06:09.443 09:15:53 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:06:09.443 09:15:53 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:09.443 09:15:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:09.443 09:15:53 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:06:09.443 09:15:53 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:06:09.443 09:15:53 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:09.443 09:15:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:09.443 09:15:53 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:09.443 09:15:53 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:06:09.443 09:15:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock load_config 00:06:12.728 09:15:56 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:06:12.728 09:15:56 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:12.728 09:15:56 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:12.728 09:15:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:12.728 09:15:56 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:12.728 09:15:56 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:12.728 09:15:56 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:12.728 09:15:56 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:12.728 09:15:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:12.728 09:15:56 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:12.986 09:15:57 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:12.986 09:15:57 json_config -- json_config/json_config.sh@48 -- # local get_types 00:06:12.986 09:15:57 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:06:12.986 09:15:57 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:06:12.986 09:15:57 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:12.986 09:15:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:12.986 09:15:57 json_config -- json_config/json_config.sh@55 -- # return 0 00:06:12.986 09:15:57 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:06:12.986 09:15:57 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:12.986 09:15:57 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:12.986 09:15:57 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:06:12.986 09:15:57 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:06:12.986 09:15:57 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:06:12.986 09:15:57 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:12.986 09:15:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:12.986 09:15:57 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:12.986 09:15:57 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:06:12.986 09:15:57 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:06:12.986 09:15:57 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:12.986 09:15:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:13.245 MallocForNvmf0 00:06:13.245 09:15:57 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:13.245 09:15:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:13.503 MallocForNvmf1 
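For reference, the json_config test above drives the target entirely through rpc.py on the Unix-domain socket /var/tmp/spdk_tgt.sock. A minimal sketch of the bdev and NVMe-oF setup it performs, reconstructed from the commands logged here (repository paths shortened to a plain SPDK checkout), is:
# two malloc bdevs that will back the NVMe-oF namespaces
scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
# TCP transport, subsystem, namespaces and listener, as exercised in the lines that follow
scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420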
00:06:13.503 09:15:57 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:13.503 09:15:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:13.503 [2024-07-14 09:15:57.954399] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:13.761 09:15:57 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:13.761 09:15:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:14.018 09:15:58 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:14.019 09:15:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:14.019 09:15:58 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:14.019 09:15:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:14.277 09:15:58 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:14.277 09:15:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:14.535 [2024-07-14 09:15:58.933690] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:14.535 09:15:58 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:06:14.535 09:15:58 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:14.535 09:15:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:14.535 09:15:58 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:06:14.535 09:15:58 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:14.535 09:15:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:14.793 09:15:58 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:06:14.793 09:15:58 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:14.793 09:15:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:14.793 MallocBdevForConfigChangeCheck 00:06:15.051 09:15:59 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:06:15.051 09:15:59 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:15.051 09:15:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:15.051 09:15:59 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:06:15.051 09:15:59 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:15.308 09:15:59 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:06:15.308 INFO: shutting down applications... 00:06:15.308 09:15:59 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:06:15.308 09:15:59 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:06:15.308 09:15:59 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:06:15.308 09:15:59 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:17.237 Calling clear_iscsi_subsystem 00:06:17.237 Calling clear_nvmf_subsystem 00:06:17.237 Calling clear_nbd_subsystem 00:06:17.237 Calling clear_ublk_subsystem 00:06:17.237 Calling clear_vhost_blk_subsystem 00:06:17.237 Calling clear_vhost_scsi_subsystem 00:06:17.237 Calling clear_bdev_subsystem 00:06:17.237 09:16:01 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:17.237 09:16:01 json_config -- json_config/json_config.sh@343 -- # count=100 00:06:17.237 09:16:01 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:06:17.237 09:16:01 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:17.237 09:16:01 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:17.237 09:16:01 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:17.495 09:16:01 json_config -- json_config/json_config.sh@345 -- # break 00:06:17.495 09:16:01 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:06:17.495 09:16:01 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:06:17.495 09:16:01 json_config -- json_config/common.sh@31 -- # local app=target 00:06:17.495 09:16:01 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:17.495 09:16:01 json_config -- json_config/common.sh@35 -- # [[ -n 608869 ]] 00:06:17.495 09:16:01 json_config -- json_config/common.sh@38 -- # kill -SIGINT 608869 00:06:17.495 09:16:01 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:17.495 09:16:01 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:17.495 09:16:01 json_config -- json_config/common.sh@41 -- # kill -0 608869 00:06:17.495 09:16:01 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:17.752 09:16:02 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:17.752 09:16:02 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:17.752 09:16:02 json_config -- json_config/common.sh@41 -- # kill -0 608869 00:06:17.752 09:16:02 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:17.752 09:16:02 json_config -- json_config/common.sh@43 -- # break 00:06:17.752 09:16:02 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:17.752 09:16:02 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:17.752 SPDK target shutdown done 00:06:17.752 09:16:02 json_config -- 
json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:06:17.752 INFO: relaunching applications... 00:06:18.010 09:16:02 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:18.010 09:16:02 json_config -- json_config/common.sh@9 -- # local app=target 00:06:18.010 09:16:02 json_config -- json_config/common.sh@10 -- # shift 00:06:18.010 09:16:02 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:18.010 09:16:02 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:18.010 09:16:02 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:18.010 09:16:02 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:18.010 09:16:02 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:18.010 09:16:02 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=610197 00:06:18.010 09:16:02 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:18.010 09:16:02 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:18.010 Waiting for target to run... 00:06:18.011 09:16:02 json_config -- json_config/common.sh@25 -- # waitforlisten 610197 /var/tmp/spdk_tgt.sock 00:06:18.011 09:16:02 json_config -- common/autotest_common.sh@829 -- # '[' -z 610197 ']' 00:06:18.011 09:16:02 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:18.011 09:16:02 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:18.011 09:16:02 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:18.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:18.011 09:16:02 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:18.011 09:16:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:18.011 [2024-07-14 09:16:02.259211] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:06:18.011 [2024-07-14 09:16:02.259306] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid610197 ] 00:06:18.011 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.579 [2024-07-14 09:16:02.790780] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.579 [2024-07-14 09:16:02.872931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.860 [2024-07-14 09:16:05.907541] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:21.860 [2024-07-14 09:16:05.939999] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:22.426 09:16:06 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:22.426 09:16:06 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:22.426 09:16:06 json_config -- json_config/common.sh@26 -- # echo '' 00:06:22.426 00:06:22.426 09:16:06 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:06:22.426 09:16:06 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:22.426 INFO: Checking if target configuration is the same... 00:06:22.426 09:16:06 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:22.426 09:16:06 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:06:22.426 09:16:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:22.426 + '[' 2 -ne 2 ']' 00:06:22.426 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:22.426 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:22.426 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:22.426 +++ basename /dev/fd/62 00:06:22.426 ++ mktemp /tmp/62.XXX 00:06:22.426 + tmp_file_1=/tmp/62.4bg 00:06:22.426 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:22.426 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:22.426 + tmp_file_2=/tmp/spdk_tgt_config.json.PdF 00:06:22.426 + ret=0 00:06:22.426 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:22.684 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:22.684 + diff -u /tmp/62.4bg /tmp/spdk_tgt_config.json.PdF 00:06:22.684 + echo 'INFO: JSON config files are the same' 00:06:22.684 INFO: JSON config files are the same 00:06:22.684 + rm /tmp/62.4bg /tmp/spdk_tgt_config.json.PdF 00:06:22.684 + exit 0 00:06:22.684 09:16:07 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:06:22.684 09:16:07 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:22.684 INFO: changing configuration and checking if this can be detected... 
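The json_diff.sh check that just printed 'INFO: JSON config files are the same' works by dumping the live configuration with save_config, sorting both JSON documents with config_filter.py -method sort, and diffing the results. A rough equivalent of that flow, with illustrative temp-file names in place of the mktemp output seen above, is:
# dump the running target's configuration and normalize both sides before diffing
scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/live_config.json
test/json_config/config_filter.py -method sort < /tmp/live_config.json > /tmp/live_sorted.json
test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/saved_sorted.json
diff -u /tmp/live_sorted.json /tmp/saved_sorted.json && echo 'INFO: JSON config files are the same'
Sorting both documents first keeps the comparison independent of the order in which subsystems report their state, which is why the change-detection pass below only fails after a bdev is actually deleted.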
00:06:22.684 09:16:07 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:22.684 09:16:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:22.942 09:16:07 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:22.942 09:16:07 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:06:22.942 09:16:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:22.942 + '[' 2 -ne 2 ']' 00:06:22.942 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:22.942 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:22.942 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:22.942 +++ basename /dev/fd/62 00:06:22.942 ++ mktemp /tmp/62.XXX 00:06:22.942 + tmp_file_1=/tmp/62.UQW 00:06:22.942 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:22.942 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:22.942 + tmp_file_2=/tmp/spdk_tgt_config.json.n1s 00:06:22.942 + ret=0 00:06:22.942 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:23.508 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:23.508 + diff -u /tmp/62.UQW /tmp/spdk_tgt_config.json.n1s 00:06:23.508 + ret=1 00:06:23.508 + echo '=== Start of file: /tmp/62.UQW ===' 00:06:23.508 + cat /tmp/62.UQW 00:06:23.508 + echo '=== End of file: /tmp/62.UQW ===' 00:06:23.508 + echo '' 00:06:23.508 + echo '=== Start of file: /tmp/spdk_tgt_config.json.n1s ===' 00:06:23.508 + cat /tmp/spdk_tgt_config.json.n1s 00:06:23.508 + echo '=== End of file: /tmp/spdk_tgt_config.json.n1s ===' 00:06:23.508 + echo '' 00:06:23.508 + rm /tmp/62.UQW /tmp/spdk_tgt_config.json.n1s 00:06:23.508 + exit 1 00:06:23.508 09:16:07 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:06:23.508 INFO: configuration change detected. 
00:06:23.508 09:16:07 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:06:23.508 09:16:07 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:06:23.508 09:16:07 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:23.508 09:16:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:23.508 09:16:07 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:06:23.508 09:16:07 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:06:23.508 09:16:07 json_config -- json_config/json_config.sh@317 -- # [[ -n 610197 ]] 00:06:23.508 09:16:07 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:06:23.508 09:16:07 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:06:23.508 09:16:07 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:23.508 09:16:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:23.508 09:16:07 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:06:23.508 09:16:07 json_config -- json_config/json_config.sh@193 -- # uname -s 00:06:23.508 09:16:07 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:06:23.508 09:16:07 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:06:23.509 09:16:07 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:06:23.509 09:16:07 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:06:23.509 09:16:07 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:23.509 09:16:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:23.509 09:16:07 json_config -- json_config/json_config.sh@323 -- # killprocess 610197 00:06:23.509 09:16:07 json_config -- common/autotest_common.sh@948 -- # '[' -z 610197 ']' 00:06:23.509 09:16:07 json_config -- common/autotest_common.sh@952 -- # kill -0 610197 00:06:23.509 09:16:07 json_config -- common/autotest_common.sh@953 -- # uname 00:06:23.509 09:16:07 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:23.509 09:16:07 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 610197 00:06:23.509 09:16:07 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:23.509 09:16:07 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:23.509 09:16:07 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 610197' 00:06:23.509 killing process with pid 610197 00:06:23.509 09:16:07 json_config -- common/autotest_common.sh@967 -- # kill 610197 00:06:23.509 09:16:07 json_config -- common/autotest_common.sh@972 -- # wait 610197 00:06:25.405 09:16:09 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:25.405 09:16:09 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:25.405 09:16:09 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:25.405 09:16:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:25.405 09:16:09 json_config -- json_config/json_config.sh@328 -- # return 0 00:06:25.405 09:16:09 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:25.405 INFO: Success 00:06:25.405 00:06:25.405 real 0m16.813s 00:06:25.405 user 
0m18.697s 00:06:25.405 sys 0m2.147s 00:06:25.405 09:16:09 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.405 09:16:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:25.405 ************************************ 00:06:25.405 END TEST json_config 00:06:25.405 ************************************ 00:06:25.405 09:16:09 -- common/autotest_common.sh@1142 -- # return 0 00:06:25.405 09:16:09 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:25.405 09:16:09 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:25.405 09:16:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.405 09:16:09 -- common/autotest_common.sh@10 -- # set +x 00:06:25.405 ************************************ 00:06:25.405 START TEST json_config_extra_key 00:06:25.405 ************************************ 00:06:25.405 09:16:09 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:25.405 09:16:09 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:25.405 09:16:09 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:25.405 09:16:09 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:25.405 09:16:09 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:25.405 09:16:09 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:25.405 09:16:09 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:25.405 09:16:09 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:25.405 09:16:09 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:25.405 09:16:09 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:25.405 09:16:09 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:25.405 09:16:09 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:25.405 09:16:09 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:25.405 09:16:09 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:25.405 09:16:09 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:25.405 09:16:09 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:25.405 09:16:09 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:25.405 09:16:09 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:25.405 09:16:09 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:25.405 09:16:09 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:25.405 09:16:09 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:25.405 09:16:09 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:25.405 09:16:09 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:25.405 09:16:09 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.405 09:16:09 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.405 09:16:09 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.405 09:16:09 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:25.405 09:16:09 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.405 09:16:09 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:25.405 09:16:09 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:25.405 09:16:09 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:25.405 09:16:09 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:25.405 09:16:09 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:25.405 09:16:09 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:25.405 09:16:09 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:25.405 09:16:09 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:25.405 09:16:09 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:25.405 09:16:09 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:25.405 09:16:09 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:25.405 09:16:09 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:25.405 09:16:09 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:25.405 09:16:09 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:25.405 09:16:09 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:25.405 09:16:09 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:25.405 09:16:09 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:25.405 09:16:09 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:25.405 09:16:09 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:25.405 09:16:09 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:25.405 INFO: launching applications... 00:06:25.405 09:16:09 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:25.405 09:16:09 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:25.405 09:16:09 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:25.405 09:16:09 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:25.405 09:16:09 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:25.405 09:16:09 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:25.405 09:16:09 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:25.405 09:16:09 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:25.405 09:16:09 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=611117 00:06:25.405 09:16:09 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:25.405 09:16:09 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:25.405 Waiting for target to run... 00:06:25.405 09:16:09 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 611117 /var/tmp/spdk_tgt.sock 00:06:25.405 09:16:09 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 611117 ']' 00:06:25.406 09:16:09 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:25.406 09:16:09 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:25.406 09:16:09 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:25.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:25.406 09:16:09 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:25.406 09:16:09 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:25.406 [2024-07-14 09:16:09.671190] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:06:25.406 [2024-07-14 09:16:09.671276] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid611117 ] 00:06:25.406 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.972 [2024-07-14 09:16:10.171643] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.972 [2024-07-14 09:16:10.253813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.230 09:16:10 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:26.230 09:16:10 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:06:26.230 09:16:10 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:26.230 00:06:26.230 09:16:10 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:26.230 INFO: shutting down applications... 00:06:26.230 09:16:10 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:26.230 09:16:10 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:26.230 09:16:10 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:26.230 09:16:10 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 611117 ]] 00:06:26.230 09:16:10 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 611117 00:06:26.230 09:16:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:26.230 09:16:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:26.230 09:16:10 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 611117 00:06:26.230 09:16:10 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:26.795 09:16:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:26.795 09:16:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:26.795 09:16:11 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 611117 00:06:26.795 09:16:11 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:26.795 09:16:11 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:26.795 09:16:11 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:26.795 09:16:11 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:26.795 SPDK target shutdown done 00:06:26.795 09:16:11 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:26.795 Success 00:06:26.795 00:06:26.795 real 0m1.602s 00:06:26.795 user 0m1.440s 00:06:26.795 sys 0m0.597s 00:06:26.795 09:16:11 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.795 09:16:11 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:26.795 ************************************ 00:06:26.795 END TEST json_config_extra_key 00:06:26.795 ************************************ 00:06:26.795 09:16:11 -- common/autotest_common.sh@1142 -- # return 0 00:06:26.795 09:16:11 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:26.795 09:16:11 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:26.795 09:16:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.795 09:16:11 -- 
common/autotest_common.sh@10 -- # set +x 00:06:26.795 ************************************ 00:06:26.795 START TEST alias_rpc 00:06:26.795 ************************************ 00:06:26.795 09:16:11 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:27.053 * Looking for test storage... 00:06:27.053 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:27.053 09:16:11 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:27.053 09:16:11 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=611426 00:06:27.053 09:16:11 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:27.053 09:16:11 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 611426 00:06:27.053 09:16:11 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 611426 ']' 00:06:27.053 09:16:11 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.053 09:16:11 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:27.053 09:16:11 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.053 09:16:11 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:27.053 09:16:11 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.053 [2024-07-14 09:16:11.312053] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:27.053 [2024-07-14 09:16:11.312140] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid611426 ] 00:06:27.053 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.053 [2024-07-14 09:16:11.371916] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.053 [2024-07-14 09:16:11.461424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.312 09:16:11 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:27.312 09:16:11 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:27.312 09:16:11 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:27.570 09:16:11 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 611426 00:06:27.570 09:16:11 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 611426 ']' 00:06:27.570 09:16:11 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 611426 00:06:27.570 09:16:11 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:06:27.570 09:16:11 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:27.570 09:16:11 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 611426 00:06:27.570 09:16:12 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:27.570 09:16:12 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:27.570 09:16:12 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 611426' 00:06:27.570 killing process with pid 611426 00:06:27.570 09:16:12 alias_rpc -- common/autotest_common.sh@967 
-- # kill 611426 00:06:27.570 09:16:12 alias_rpc -- common/autotest_common.sh@972 -- # wait 611426 00:06:28.137 00:06:28.137 real 0m1.198s 00:06:28.137 user 0m1.289s 00:06:28.137 sys 0m0.398s 00:06:28.137 09:16:12 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.137 09:16:12 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.137 ************************************ 00:06:28.137 END TEST alias_rpc 00:06:28.137 ************************************ 00:06:28.137 09:16:12 -- common/autotest_common.sh@1142 -- # return 0 00:06:28.137 09:16:12 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:28.137 09:16:12 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:28.137 09:16:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:28.137 09:16:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.137 09:16:12 -- common/autotest_common.sh@10 -- # set +x 00:06:28.137 ************************************ 00:06:28.137 START TEST spdkcli_tcp 00:06:28.137 ************************************ 00:06:28.137 09:16:12 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:28.137 * Looking for test storage... 00:06:28.137 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:28.137 09:16:12 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:28.137 09:16:12 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:28.137 09:16:12 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:28.137 09:16:12 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:28.137 09:16:12 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:28.137 09:16:12 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:28.137 09:16:12 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:28.137 09:16:12 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:28.137 09:16:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:28.137 09:16:12 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=611612 00:06:28.137 09:16:12 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:28.137 09:16:12 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 611612 00:06:28.137 09:16:12 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 611612 ']' 00:06:28.137 09:16:12 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.137 09:16:12 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:28.137 09:16:12 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.137 09:16:12 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:28.137 09:16:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:28.137 [2024-07-14 09:16:12.562470] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:06:28.138 [2024-07-14 09:16:12.562558] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid611612 ] 00:06:28.138 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.396 [2024-07-14 09:16:12.619871] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:28.396 [2024-07-14 09:16:12.706259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.396 [2024-07-14 09:16:12.706262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.654 09:16:12 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:28.654 09:16:12 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:06:28.654 09:16:12 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=611629 00:06:28.654 09:16:12 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:28.654 09:16:12 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:28.911 [ 00:06:28.911 "bdev_malloc_delete", 00:06:28.911 "bdev_malloc_create", 00:06:28.911 "bdev_null_resize", 00:06:28.911 "bdev_null_delete", 00:06:28.911 "bdev_null_create", 00:06:28.911 "bdev_nvme_cuse_unregister", 00:06:28.911 "bdev_nvme_cuse_register", 00:06:28.911 "bdev_opal_new_user", 00:06:28.911 "bdev_opal_set_lock_state", 00:06:28.911 "bdev_opal_delete", 00:06:28.911 "bdev_opal_get_info", 00:06:28.911 "bdev_opal_create", 00:06:28.911 "bdev_nvme_opal_revert", 00:06:28.911 "bdev_nvme_opal_init", 00:06:28.911 "bdev_nvme_send_cmd", 00:06:28.911 "bdev_nvme_get_path_iostat", 00:06:28.911 "bdev_nvme_get_mdns_discovery_info", 00:06:28.911 "bdev_nvme_stop_mdns_discovery", 00:06:28.911 "bdev_nvme_start_mdns_discovery", 00:06:28.911 "bdev_nvme_set_multipath_policy", 00:06:28.911 "bdev_nvme_set_preferred_path", 00:06:28.911 "bdev_nvme_get_io_paths", 00:06:28.911 "bdev_nvme_remove_error_injection", 00:06:28.911 "bdev_nvme_add_error_injection", 00:06:28.911 "bdev_nvme_get_discovery_info", 00:06:28.911 "bdev_nvme_stop_discovery", 00:06:28.911 "bdev_nvme_start_discovery", 00:06:28.911 "bdev_nvme_get_controller_health_info", 00:06:28.911 "bdev_nvme_disable_controller", 00:06:28.911 "bdev_nvme_enable_controller", 00:06:28.911 "bdev_nvme_reset_controller", 00:06:28.911 "bdev_nvme_get_transport_statistics", 00:06:28.911 "bdev_nvme_apply_firmware", 00:06:28.911 "bdev_nvme_detach_controller", 00:06:28.911 "bdev_nvme_get_controllers", 00:06:28.911 "bdev_nvme_attach_controller", 00:06:28.911 "bdev_nvme_set_hotplug", 00:06:28.911 "bdev_nvme_set_options", 00:06:28.911 "bdev_passthru_delete", 00:06:28.911 "bdev_passthru_create", 00:06:28.911 "bdev_lvol_set_parent_bdev", 00:06:28.911 "bdev_lvol_set_parent", 00:06:28.911 "bdev_lvol_check_shallow_copy", 00:06:28.911 "bdev_lvol_start_shallow_copy", 00:06:28.911 "bdev_lvol_grow_lvstore", 00:06:28.911 "bdev_lvol_get_lvols", 00:06:28.911 "bdev_lvol_get_lvstores", 00:06:28.911 "bdev_lvol_delete", 00:06:28.911 "bdev_lvol_set_read_only", 00:06:28.911 "bdev_lvol_resize", 00:06:28.911 "bdev_lvol_decouple_parent", 00:06:28.911 "bdev_lvol_inflate", 00:06:28.911 "bdev_lvol_rename", 00:06:28.911 "bdev_lvol_clone_bdev", 00:06:28.911 "bdev_lvol_clone", 00:06:28.911 "bdev_lvol_snapshot", 00:06:28.911 "bdev_lvol_create", 00:06:28.911 "bdev_lvol_delete_lvstore", 00:06:28.911 
"bdev_lvol_rename_lvstore", 00:06:28.911 "bdev_lvol_create_lvstore", 00:06:28.911 "bdev_raid_set_options", 00:06:28.911 "bdev_raid_remove_base_bdev", 00:06:28.911 "bdev_raid_add_base_bdev", 00:06:28.911 "bdev_raid_delete", 00:06:28.911 "bdev_raid_create", 00:06:28.911 "bdev_raid_get_bdevs", 00:06:28.911 "bdev_error_inject_error", 00:06:28.911 "bdev_error_delete", 00:06:28.911 "bdev_error_create", 00:06:28.911 "bdev_split_delete", 00:06:28.911 "bdev_split_create", 00:06:28.911 "bdev_delay_delete", 00:06:28.911 "bdev_delay_create", 00:06:28.911 "bdev_delay_update_latency", 00:06:28.911 "bdev_zone_block_delete", 00:06:28.911 "bdev_zone_block_create", 00:06:28.911 "blobfs_create", 00:06:28.911 "blobfs_detect", 00:06:28.912 "blobfs_set_cache_size", 00:06:28.912 "bdev_aio_delete", 00:06:28.912 "bdev_aio_rescan", 00:06:28.912 "bdev_aio_create", 00:06:28.912 "bdev_ftl_set_property", 00:06:28.912 "bdev_ftl_get_properties", 00:06:28.912 "bdev_ftl_get_stats", 00:06:28.912 "bdev_ftl_unmap", 00:06:28.912 "bdev_ftl_unload", 00:06:28.912 "bdev_ftl_delete", 00:06:28.912 "bdev_ftl_load", 00:06:28.912 "bdev_ftl_create", 00:06:28.912 "bdev_virtio_attach_controller", 00:06:28.912 "bdev_virtio_scsi_get_devices", 00:06:28.912 "bdev_virtio_detach_controller", 00:06:28.912 "bdev_virtio_blk_set_hotplug", 00:06:28.912 "bdev_iscsi_delete", 00:06:28.912 "bdev_iscsi_create", 00:06:28.912 "bdev_iscsi_set_options", 00:06:28.912 "accel_error_inject_error", 00:06:28.912 "ioat_scan_accel_module", 00:06:28.912 "dsa_scan_accel_module", 00:06:28.912 "iaa_scan_accel_module", 00:06:28.912 "vfu_virtio_create_scsi_endpoint", 00:06:28.912 "vfu_virtio_scsi_remove_target", 00:06:28.912 "vfu_virtio_scsi_add_target", 00:06:28.912 "vfu_virtio_create_blk_endpoint", 00:06:28.912 "vfu_virtio_delete_endpoint", 00:06:28.912 "keyring_file_remove_key", 00:06:28.912 "keyring_file_add_key", 00:06:28.912 "keyring_linux_set_options", 00:06:28.912 "iscsi_get_histogram", 00:06:28.912 "iscsi_enable_histogram", 00:06:28.912 "iscsi_set_options", 00:06:28.912 "iscsi_get_auth_groups", 00:06:28.912 "iscsi_auth_group_remove_secret", 00:06:28.912 "iscsi_auth_group_add_secret", 00:06:28.912 "iscsi_delete_auth_group", 00:06:28.912 "iscsi_create_auth_group", 00:06:28.912 "iscsi_set_discovery_auth", 00:06:28.912 "iscsi_get_options", 00:06:28.912 "iscsi_target_node_request_logout", 00:06:28.912 "iscsi_target_node_set_redirect", 00:06:28.912 "iscsi_target_node_set_auth", 00:06:28.912 "iscsi_target_node_add_lun", 00:06:28.912 "iscsi_get_stats", 00:06:28.912 "iscsi_get_connections", 00:06:28.912 "iscsi_portal_group_set_auth", 00:06:28.912 "iscsi_start_portal_group", 00:06:28.912 "iscsi_delete_portal_group", 00:06:28.912 "iscsi_create_portal_group", 00:06:28.912 "iscsi_get_portal_groups", 00:06:28.912 "iscsi_delete_target_node", 00:06:28.912 "iscsi_target_node_remove_pg_ig_maps", 00:06:28.912 "iscsi_target_node_add_pg_ig_maps", 00:06:28.912 "iscsi_create_target_node", 00:06:28.912 "iscsi_get_target_nodes", 00:06:28.912 "iscsi_delete_initiator_group", 00:06:28.912 "iscsi_initiator_group_remove_initiators", 00:06:28.912 "iscsi_initiator_group_add_initiators", 00:06:28.912 "iscsi_create_initiator_group", 00:06:28.912 "iscsi_get_initiator_groups", 00:06:28.912 "nvmf_set_crdt", 00:06:28.912 "nvmf_set_config", 00:06:28.912 "nvmf_set_max_subsystems", 00:06:28.912 "nvmf_stop_mdns_prr", 00:06:28.912 "nvmf_publish_mdns_prr", 00:06:28.912 "nvmf_subsystem_get_listeners", 00:06:28.912 "nvmf_subsystem_get_qpairs", 00:06:28.912 "nvmf_subsystem_get_controllers", 00:06:28.912 
"nvmf_get_stats", 00:06:28.912 "nvmf_get_transports", 00:06:28.912 "nvmf_create_transport", 00:06:28.912 "nvmf_get_targets", 00:06:28.912 "nvmf_delete_target", 00:06:28.912 "nvmf_create_target", 00:06:28.912 "nvmf_subsystem_allow_any_host", 00:06:28.912 "nvmf_subsystem_remove_host", 00:06:28.912 "nvmf_subsystem_add_host", 00:06:28.912 "nvmf_ns_remove_host", 00:06:28.912 "nvmf_ns_add_host", 00:06:28.912 "nvmf_subsystem_remove_ns", 00:06:28.912 "nvmf_subsystem_add_ns", 00:06:28.912 "nvmf_subsystem_listener_set_ana_state", 00:06:28.912 "nvmf_discovery_get_referrals", 00:06:28.912 "nvmf_discovery_remove_referral", 00:06:28.912 "nvmf_discovery_add_referral", 00:06:28.912 "nvmf_subsystem_remove_listener", 00:06:28.912 "nvmf_subsystem_add_listener", 00:06:28.912 "nvmf_delete_subsystem", 00:06:28.912 "nvmf_create_subsystem", 00:06:28.912 "nvmf_get_subsystems", 00:06:28.912 "env_dpdk_get_mem_stats", 00:06:28.912 "nbd_get_disks", 00:06:28.912 "nbd_stop_disk", 00:06:28.912 "nbd_start_disk", 00:06:28.912 "ublk_recover_disk", 00:06:28.912 "ublk_get_disks", 00:06:28.912 "ublk_stop_disk", 00:06:28.912 "ublk_start_disk", 00:06:28.912 "ublk_destroy_target", 00:06:28.912 "ublk_create_target", 00:06:28.912 "virtio_blk_create_transport", 00:06:28.912 "virtio_blk_get_transports", 00:06:28.912 "vhost_controller_set_coalescing", 00:06:28.912 "vhost_get_controllers", 00:06:28.912 "vhost_delete_controller", 00:06:28.912 "vhost_create_blk_controller", 00:06:28.912 "vhost_scsi_controller_remove_target", 00:06:28.912 "vhost_scsi_controller_add_target", 00:06:28.912 "vhost_start_scsi_controller", 00:06:28.912 "vhost_create_scsi_controller", 00:06:28.912 "thread_set_cpumask", 00:06:28.912 "framework_get_governor", 00:06:28.912 "framework_get_scheduler", 00:06:28.912 "framework_set_scheduler", 00:06:28.912 "framework_get_reactors", 00:06:28.912 "thread_get_io_channels", 00:06:28.912 "thread_get_pollers", 00:06:28.912 "thread_get_stats", 00:06:28.912 "framework_monitor_context_switch", 00:06:28.912 "spdk_kill_instance", 00:06:28.912 "log_enable_timestamps", 00:06:28.912 "log_get_flags", 00:06:28.912 "log_clear_flag", 00:06:28.912 "log_set_flag", 00:06:28.912 "log_get_level", 00:06:28.912 "log_set_level", 00:06:28.912 "log_get_print_level", 00:06:28.912 "log_set_print_level", 00:06:28.912 "framework_enable_cpumask_locks", 00:06:28.912 "framework_disable_cpumask_locks", 00:06:28.912 "framework_wait_init", 00:06:28.912 "framework_start_init", 00:06:28.912 "scsi_get_devices", 00:06:28.912 "bdev_get_histogram", 00:06:28.912 "bdev_enable_histogram", 00:06:28.912 "bdev_set_qos_limit", 00:06:28.912 "bdev_set_qd_sampling_period", 00:06:28.912 "bdev_get_bdevs", 00:06:28.912 "bdev_reset_iostat", 00:06:28.912 "bdev_get_iostat", 00:06:28.912 "bdev_examine", 00:06:28.912 "bdev_wait_for_examine", 00:06:28.912 "bdev_set_options", 00:06:28.912 "notify_get_notifications", 00:06:28.912 "notify_get_types", 00:06:28.912 "accel_get_stats", 00:06:28.912 "accel_set_options", 00:06:28.912 "accel_set_driver", 00:06:28.912 "accel_crypto_key_destroy", 00:06:28.912 "accel_crypto_keys_get", 00:06:28.912 "accel_crypto_key_create", 00:06:28.912 "accel_assign_opc", 00:06:28.912 "accel_get_module_info", 00:06:28.912 "accel_get_opc_assignments", 00:06:28.912 "vmd_rescan", 00:06:28.912 "vmd_remove_device", 00:06:28.912 "vmd_enable", 00:06:28.912 "sock_get_default_impl", 00:06:28.912 "sock_set_default_impl", 00:06:28.912 "sock_impl_set_options", 00:06:28.912 "sock_impl_get_options", 00:06:28.912 "iobuf_get_stats", 00:06:28.912 "iobuf_set_options", 
00:06:28.912 "keyring_get_keys", 00:06:28.912 "framework_get_pci_devices", 00:06:28.912 "framework_get_config", 00:06:28.912 "framework_get_subsystems", 00:06:28.912 "vfu_tgt_set_base_path", 00:06:28.912 "trace_get_info", 00:06:28.912 "trace_get_tpoint_group_mask", 00:06:28.912 "trace_disable_tpoint_group", 00:06:28.912 "trace_enable_tpoint_group", 00:06:28.912 "trace_clear_tpoint_mask", 00:06:28.912 "trace_set_tpoint_mask", 00:06:28.912 "spdk_get_version", 00:06:28.912 "rpc_get_methods" 00:06:28.912 ] 00:06:28.912 09:16:13 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:28.912 09:16:13 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:28.912 09:16:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:28.912 09:16:13 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:28.912 09:16:13 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 611612 00:06:28.912 09:16:13 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 611612 ']' 00:06:28.912 09:16:13 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 611612 00:06:28.912 09:16:13 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:06:28.912 09:16:13 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:28.912 09:16:13 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 611612 00:06:28.912 09:16:13 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:28.912 09:16:13 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:28.912 09:16:13 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 611612' 00:06:28.912 killing process with pid 611612 00:06:28.912 09:16:13 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 611612 00:06:28.912 09:16:13 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 611612 00:06:29.478 00:06:29.478 real 0m1.210s 00:06:29.478 user 0m2.147s 00:06:29.478 sys 0m0.434s 00:06:29.478 09:16:13 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.478 09:16:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:29.478 ************************************ 00:06:29.478 END TEST spdkcli_tcp 00:06:29.478 ************************************ 00:06:29.478 09:16:13 -- common/autotest_common.sh@1142 -- # return 0 00:06:29.478 09:16:13 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:29.478 09:16:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:29.478 09:16:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.478 09:16:13 -- common/autotest_common.sh@10 -- # set +x 00:06:29.478 ************************************ 00:06:29.478 START TEST dpdk_mem_utility 00:06:29.478 ************************************ 00:06:29.478 09:16:13 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:29.478 * Looking for test storage... 
00:06:29.478 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:29.478 09:16:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:29.478 09:16:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=611820 00:06:29.478 09:16:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:29.478 09:16:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 611820 00:06:29.478 09:16:13 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 611820 ']' 00:06:29.478 09:16:13 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.478 09:16:13 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:29.478 09:16:13 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.478 09:16:13 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:29.478 09:16:13 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:29.478 [2024-07-14 09:16:13.819627] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:29.478 [2024-07-14 09:16:13.819721] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid611820 ] 00:06:29.478 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.478 [2024-07-14 09:16:13.875795] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.735 [2024-07-14 09:16:13.960736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.994 09:16:14 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:29.994 09:16:14 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:06:29.994 09:16:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:29.994 09:16:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:29.994 09:16:14 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.994 09:16:14 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:29.994 { 00:06:29.994 "filename": "/tmp/spdk_mem_dump.txt" 00:06:29.994 } 00:06:29.994 09:16:14 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.994 09:16:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:29.994 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:29.994 1 heaps totaling size 814.000000 MiB 00:06:29.994 size: 814.000000 MiB heap id: 0 00:06:29.994 end heaps---------- 00:06:29.994 8 mempools totaling size 598.116089 MiB 00:06:29.994 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:29.994 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:29.994 size: 84.521057 MiB name: bdev_io_611820 00:06:29.994 size: 51.011292 MiB name: evtpool_611820 00:06:29.994 size: 
50.003479 MiB name: msgpool_611820 00:06:29.994 size: 21.763794 MiB name: PDU_Pool 00:06:29.994 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:29.994 size: 0.026123 MiB name: Session_Pool 00:06:29.994 end mempools------- 00:06:29.994 6 memzones totaling size 4.142822 MiB 00:06:29.994 size: 1.000366 MiB name: RG_ring_0_611820 00:06:29.994 size: 1.000366 MiB name: RG_ring_1_611820 00:06:29.994 size: 1.000366 MiB name: RG_ring_4_611820 00:06:29.994 size: 1.000366 MiB name: RG_ring_5_611820 00:06:29.994 size: 0.125366 MiB name: RG_ring_2_611820 00:06:29.994 size: 0.015991 MiB name: RG_ring_3_611820 00:06:29.994 end memzones------- 00:06:29.994 09:16:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:29.994 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:29.994 list of free elements. size: 12.519348 MiB 00:06:29.994 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:29.994 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:29.994 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:29.994 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:29.994 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:29.994 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:29.994 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:29.994 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:29.994 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:29.994 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:29.994 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:29.994 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:29.994 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:29.994 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:29.994 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:29.994 list of standard malloc elements. 
size: 199.218079 MiB 00:06:29.994 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:29.994 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:29.994 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:29.994 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:29.994 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:29.994 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:29.994 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:29.994 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:29.994 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:29.994 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:29.994 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:29.994 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:29.994 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:29.994 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:29.994 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:29.994 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:29.994 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:29.994 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:29.994 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:29.994 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:29.994 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:29.994 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:29.994 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:29.994 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:29.994 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:29.994 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:29.994 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:29.994 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:29.994 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:29.994 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:29.994 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:29.994 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:29.994 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:29.994 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:29.994 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:29.994 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:29.994 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:29.994 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:29.994 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:29.994 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:29.994 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:29.994 list of memzone associated elements. 
size: 602.262573 MiB 00:06:29.994 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:29.994 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:29.994 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:29.994 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:29.994 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:29.994 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_611820_0 00:06:29.994 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:29.994 associated memzone info: size: 48.002930 MiB name: MP_evtpool_611820_0 00:06:29.994 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:29.994 associated memzone info: size: 48.002930 MiB name: MP_msgpool_611820_0 00:06:29.994 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:29.994 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:29.994 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:29.994 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:29.994 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:29.994 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_611820 00:06:29.994 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:29.994 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_611820 00:06:29.994 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:29.994 associated memzone info: size: 1.007996 MiB name: MP_evtpool_611820 00:06:29.994 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:29.994 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:29.994 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:29.994 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:29.994 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:29.994 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:29.994 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:29.994 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:29.994 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:29.994 associated memzone info: size: 1.000366 MiB name: RG_ring_0_611820 00:06:29.994 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:29.994 associated memzone info: size: 1.000366 MiB name: RG_ring_1_611820 00:06:29.994 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:29.994 associated memzone info: size: 1.000366 MiB name: RG_ring_4_611820 00:06:29.994 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:29.994 associated memzone info: size: 1.000366 MiB name: RG_ring_5_611820 00:06:29.994 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:29.994 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_611820 00:06:29.994 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:29.994 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:29.994 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:29.994 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:29.994 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:29.994 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:29.994 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:29.994 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_611820 00:06:29.994 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:29.994 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:29.994 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:29.994 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:29.994 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:29.994 associated memzone info: size: 0.015991 MiB name: RG_ring_3_611820 00:06:29.994 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:29.994 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:29.994 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:29.994 associated memzone info: size: 0.000183 MiB name: MP_msgpool_611820 00:06:29.994 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:29.994 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_611820 00:06:29.994 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:29.994 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:29.994 09:16:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:29.994 09:16:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 611820 00:06:29.994 09:16:14 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 611820 ']' 00:06:29.994 09:16:14 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 611820 00:06:29.994 09:16:14 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:06:29.994 09:16:14 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:29.994 09:16:14 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 611820 00:06:29.994 09:16:14 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:29.994 09:16:14 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:29.994 09:16:14 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 611820' 00:06:29.994 killing process with pid 611820 00:06:29.994 09:16:14 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 611820 00:06:29.994 09:16:14 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 611820 00:06:30.561 00:06:30.561 real 0m1.029s 00:06:30.561 user 0m0.997s 00:06:30.561 sys 0m0.405s 00:06:30.561 09:16:14 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.561 09:16:14 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:30.561 ************************************ 00:06:30.561 END TEST dpdk_mem_utility 00:06:30.561 ************************************ 00:06:30.561 09:16:14 -- common/autotest_common.sh@1142 -- # return 0 00:06:30.561 09:16:14 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:30.561 09:16:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:30.561 09:16:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.561 09:16:14 -- common/autotest_common.sh@10 -- # set +x 00:06:30.561 ************************************ 00:06:30.561 START TEST event 00:06:30.561 ************************************ 00:06:30.561 09:16:14 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:30.561 * Looking for test storage... 
00:06:30.561 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:30.561 09:16:14 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:30.561 09:16:14 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:30.561 09:16:14 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:30.561 09:16:14 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:30.561 09:16:14 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.561 09:16:14 event -- common/autotest_common.sh@10 -- # set +x 00:06:30.561 ************************************ 00:06:30.561 START TEST event_perf 00:06:30.561 ************************************ 00:06:30.561 09:16:14 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:30.561 Running I/O for 1 seconds...[2024-07-14 09:16:14.884098] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:30.561 [2024-07-14 09:16:14.884162] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid612008 ] 00:06:30.561 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.561 [2024-07-14 09:16:14.946750] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:30.818 [2024-07-14 09:16:15.040515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.818 [2024-07-14 09:16:15.040582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:30.818 [2024-07-14 09:16:15.040672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:30.818 [2024-07-14 09:16:15.040675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.767 Running I/O for 1 seconds... 00:06:31.767 lcore 0: 235847 00:06:31.767 lcore 1: 235847 00:06:31.767 lcore 2: 235848 00:06:31.767 lcore 3: 235846 00:06:31.767 done. 00:06:31.767 00:06:31.767 real 0m1.253s 00:06:31.767 user 0m4.162s 00:06:31.767 sys 0m0.086s 00:06:31.767 09:16:16 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.767 09:16:16 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:31.767 ************************************ 00:06:31.767 END TEST event_perf 00:06:31.767 ************************************ 00:06:31.767 09:16:16 event -- common/autotest_common.sh@1142 -- # return 0 00:06:31.767 09:16:16 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:31.767 09:16:16 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:31.767 09:16:16 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.767 09:16:16 event -- common/autotest_common.sh@10 -- # set +x 00:06:31.767 ************************************ 00:06:31.767 START TEST event_reactor 00:06:31.767 ************************************ 00:06:31.767 09:16:16 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:31.767 [2024-07-14 09:16:16.183833] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:06:31.768 [2024-07-14 09:16:16.183922] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid612165 ] 00:06:32.072 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.072 [2024-07-14 09:16:16.251193] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.072 [2024-07-14 09:16:16.344473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.006 test_start 00:06:33.006 oneshot 00:06:33.006 tick 100 00:06:33.006 tick 100 00:06:33.006 tick 250 00:06:33.006 tick 100 00:06:33.006 tick 100 00:06:33.006 tick 100 00:06:33.006 tick 250 00:06:33.006 tick 500 00:06:33.006 tick 100 00:06:33.006 tick 100 00:06:33.006 tick 250 00:06:33.006 tick 100 00:06:33.006 tick 100 00:06:33.006 test_end 00:06:33.006 00:06:33.006 real 0m1.253s 00:06:33.006 user 0m1.160s 00:06:33.006 sys 0m0.088s 00:06:33.006 09:16:17 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.006 09:16:17 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:33.006 ************************************ 00:06:33.006 END TEST event_reactor 00:06:33.006 ************************************ 00:06:33.006 09:16:17 event -- common/autotest_common.sh@1142 -- # return 0 00:06:33.006 09:16:17 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:33.006 09:16:17 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:33.006 09:16:17 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.006 09:16:17 event -- common/autotest_common.sh@10 -- # set +x 00:06:33.264 ************************************ 00:06:33.264 START TEST event_reactor_perf 00:06:33.264 ************************************ 00:06:33.264 09:16:17 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:33.264 [2024-07-14 09:16:17.483325] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:06:33.264 [2024-07-14 09:16:17.483403] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid612322 ] 00:06:33.264 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.264 [2024-07-14 09:16:17.545254] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.264 [2024-07-14 09:16:17.636934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.639 test_start 00:06:34.639 test_end 00:06:34.639 Performance: 356430 events per second 00:06:34.639 00:06:34.639 real 0m1.245s 00:06:34.639 user 0m1.151s 00:06:34.639 sys 0m0.088s 00:06:34.639 09:16:18 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.639 09:16:18 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:34.639 ************************************ 00:06:34.639 END TEST event_reactor_perf 00:06:34.639 ************************************ 00:06:34.639 09:16:18 event -- common/autotest_common.sh@1142 -- # return 0 00:06:34.639 09:16:18 event -- event/event.sh@49 -- # uname -s 00:06:34.639 09:16:18 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:34.639 09:16:18 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:34.639 09:16:18 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:34.639 09:16:18 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.639 09:16:18 event -- common/autotest_common.sh@10 -- # set +x 00:06:34.639 ************************************ 00:06:34.639 START TEST event_scheduler 00:06:34.639 ************************************ 00:06:34.639 09:16:18 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:34.639 * Looking for test storage... 00:06:34.639 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:34.639 09:16:18 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:34.639 09:16:18 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=612516 00:06:34.639 09:16:18 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:34.639 09:16:18 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:34.639 09:16:18 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 612516 00:06:34.639 09:16:18 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 612516 ']' 00:06:34.639 09:16:18 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.639 09:16:18 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:34.639 09:16:18 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:34.639 09:16:18 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:34.639 09:16:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:34.639 [2024-07-14 09:16:18.848385] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:34.639 [2024-07-14 09:16:18.848477] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid612516 ] 00:06:34.639 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.639 [2024-07-14 09:16:18.905459] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:34.639 [2024-07-14 09:16:18.992326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.639 [2024-07-14 09:16:18.992390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.639 [2024-07-14 09:16:18.992456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:34.639 [2024-07-14 09:16:18.992458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:34.639 09:16:19 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:34.639 09:16:19 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:06:34.639 09:16:19 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:34.639 09:16:19 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.639 09:16:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:34.639 [2024-07-14 09:16:19.049243] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:34.639 [2024-07-14 09:16:19.049268] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:34.639 [2024-07-14 09:16:19.049300] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:34.639 [2024-07-14 09:16:19.049313] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:34.639 [2024-07-14 09:16:19.049323] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:34.639 09:16:19 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.639 09:16:19 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:34.639 09:16:19 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.639 09:16:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:34.898 [2024-07-14 09:16:19.140560] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
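The trace above is the standard --wait-for-rpc flow: the target brings up its reactors, the dynamic scheduler is selected over RPC, and only then is subsystem initialization released. A minimal by-hand sketch of the same sequence, assuming a locally built tree (rpc_cmd in the trace is the test wrapper around scripts/rpc.py; the 0xF mask matches the run above, and the load/core/busy thresholds printed above are the dynamic scheduler's defaults rather than values passed on the command line):

    # hold the target before subsystem init so the scheduler can still be changed
    ./build/bin/spdk_tgt -m 0xF --wait-for-rpc &

    ./scripts/rpc.py framework_set_scheduler dynamic   # swap in the dynamic scheduler
    ./scripts/rpc.py framework_start_init              # release subsystem initialization
    ./scripts/rpc.py framework_get_scheduler           # should now report "dynamic"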
00:06:34.898 09:16:19 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.898 09:16:19 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:34.898 09:16:19 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:34.898 09:16:19 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.898 09:16:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:34.898 ************************************ 00:06:34.898 START TEST scheduler_create_thread 00:06:34.898 ************************************ 00:06:34.898 09:16:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:06:34.899 09:16:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:34.899 09:16:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.899 09:16:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.899 2 00:06:34.899 09:16:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.899 09:16:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:34.899 09:16:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.899 09:16:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.899 3 00:06:34.899 09:16:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.899 09:16:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:34.899 09:16:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.899 09:16:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.899 4 00:06:34.899 09:16:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.899 09:16:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:34.899 09:16:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.899 09:16:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.899 5 00:06:34.899 09:16:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.899 09:16:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:34.899 09:16:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.899 09:16:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.899 6 00:06:34.899 09:16:19 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.899 09:16:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:34.899 09:16:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.899 09:16:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.899 7 00:06:34.899 09:16:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.899 09:16:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:34.899 09:16:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.899 09:16:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.899 8 00:06:34.899 09:16:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.899 09:16:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:34.899 09:16:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.899 09:16:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.899 9 00:06:34.899 09:16:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.899 09:16:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:34.899 09:16:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.899 09:16:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.899 10 00:06:34.899 09:16:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.899 09:16:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:34.899 09:16:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.899 09:16:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.899 09:16:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.899 09:16:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:34.899 09:16:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:34.899 09:16:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.899 09:16:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.899 09:16:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.899 09:16:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:34.899 09:16:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.899 09:16:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.899 09:16:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.899 09:16:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:34.899 09:16:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:34.899 09:16:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.899 09:16:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.273 09:16:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.273 00:06:36.273 real 0m1.173s 00:06:36.273 user 0m0.013s 00:06:36.273 sys 0m0.001s 00:06:36.273 09:16:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.273 09:16:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.273 ************************************ 00:06:36.273 END TEST scheduler_create_thread 00:06:36.274 ************************************ 00:06:36.274 09:16:20 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:06:36.274 09:16:20 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:36.274 09:16:20 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 612516 00:06:36.274 09:16:20 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 612516 ']' 00:06:36.274 09:16:20 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 612516 00:06:36.274 09:16:20 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:06:36.274 09:16:20 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:36.274 09:16:20 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 612516 00:06:36.274 09:16:20 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:36.274 09:16:20 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:36.274 09:16:20 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 612516' 00:06:36.274 killing process with pid 612516 00:06:36.274 09:16:20 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 612516 00:06:36.274 09:16:20 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 612516 00:06:36.532 [2024-07-14 09:16:20.818825] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
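The scheduler_create_thread subtest above exercises the test app's RPC plugin: each scheduler_thread_create registers a dummy thread with a name, an optional cpumask and an "active" percentage, and the returned thread id can then be throttled with scheduler_thread_set_active or removed with scheduler_thread_delete. A rough by-hand equivalent against a scheduler app started as above (a sketch: the flags are exactly those traced in the log, the ids 11 and 12 are simply what the create calls returned in this run, and the plugin path is an assumption, it just has to be importable for --plugin to resolve):

    # scheduler_plugin.py ships with the test app; put it on PYTHONPATH so --plugin can find it
    export PYTHONPATH=./test/event/scheduler:$PYTHONPATH
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100  # prints the new thread id (11 in this run)
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50                       # throttle that thread to 50% busy
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12                              # remove the second thread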
00:06:36.789 00:06:36.789 real 0m2.267s 00:06:36.789 user 0m2.614s 00:06:36.789 sys 0m0.318s 00:06:36.789 09:16:21 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.789 09:16:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:36.789 ************************************ 00:06:36.789 END TEST event_scheduler 00:06:36.789 ************************************ 00:06:36.789 09:16:21 event -- common/autotest_common.sh@1142 -- # return 0 00:06:36.789 09:16:21 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:36.789 09:16:21 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:36.789 09:16:21 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:36.789 09:16:21 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.789 09:16:21 event -- common/autotest_common.sh@10 -- # set +x 00:06:36.789 ************************************ 00:06:36.789 START TEST app_repeat 00:06:36.789 ************************************ 00:06:36.789 09:16:21 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:06:36.789 09:16:21 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.789 09:16:21 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.789 09:16:21 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:36.789 09:16:21 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:36.789 09:16:21 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:36.789 09:16:21 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:36.789 09:16:21 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:36.789 09:16:21 event.app_repeat -- event/event.sh@19 -- # repeat_pid=612830 00:06:36.789 09:16:21 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:36.789 09:16:21 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:36.789 09:16:21 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 612830' 00:06:36.789 Process app_repeat pid: 612830 00:06:36.789 09:16:21 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:36.789 09:16:21 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:36.789 spdk_app_start Round 0 00:06:36.789 09:16:21 event.app_repeat -- event/event.sh@25 -- # waitforlisten 612830 /var/tmp/spdk-nbd.sock 00:06:36.789 09:16:21 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 612830 ']' 00:06:36.789 09:16:21 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:36.789 09:16:21 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:36.789 09:16:21 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:36.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:36.789 09:16:21 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:36.789 09:16:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:36.789 [2024-07-14 09:16:21.105128] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:06:36.789 [2024-07-14 09:16:21.105193] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid612830 ] 00:06:36.789 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.789 [2024-07-14 09:16:21.168760] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:37.046 [2024-07-14 09:16:21.259614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.046 [2024-07-14 09:16:21.259620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.046 09:16:21 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:37.046 09:16:21 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:37.046 09:16:21 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:37.303 Malloc0 00:06:37.303 09:16:21 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:37.561 Malloc1 00:06:37.561 09:16:21 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:37.561 09:16:21 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.561 09:16:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:37.561 09:16:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:37.561 09:16:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.561 09:16:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:37.561 09:16:21 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:37.561 09:16:21 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.561 09:16:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:37.561 09:16:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:37.561 09:16:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.561 09:16:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:37.561 09:16:21 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:37.561 09:16:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:37.561 09:16:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:37.561 09:16:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:37.818 /dev/nbd0 00:06:37.818 09:16:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:37.818 09:16:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:37.818 09:16:22 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:37.818 09:16:22 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:37.818 09:16:22 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:37.818 09:16:22 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:37.818 09:16:22 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:37.818 09:16:22 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:37.818 09:16:22 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:37.818 09:16:22 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:37.818 09:16:22 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:37.818 1+0 records in 00:06:37.818 1+0 records out 00:06:37.818 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000149255 s, 27.4 MB/s 00:06:37.818 09:16:22 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:37.818 09:16:22 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:37.818 09:16:22 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:37.818 09:16:22 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:37.818 09:16:22 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:37.818 09:16:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:37.818 09:16:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:37.818 09:16:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:38.076 /dev/nbd1 00:06:38.076 09:16:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:38.076 09:16:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:38.076 09:16:22 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:38.076 09:16:22 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:38.076 09:16:22 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:38.076 09:16:22 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:38.076 09:16:22 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:38.076 09:16:22 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:38.076 09:16:22 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:38.076 09:16:22 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:38.076 09:16:22 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:38.076 1+0 records in 00:06:38.076 1+0 records out 00:06:38.076 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000166056 s, 24.7 MB/s 00:06:38.076 09:16:22 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:38.076 09:16:22 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:38.076 09:16:22 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:38.076 09:16:22 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:38.076 09:16:22 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:38.076 09:16:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:38.076 09:16:22 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:38.076 09:16:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:38.076 09:16:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.076 09:16:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:38.334 09:16:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:38.334 { 00:06:38.334 "nbd_device": "/dev/nbd0", 00:06:38.334 "bdev_name": "Malloc0" 00:06:38.334 }, 00:06:38.334 { 00:06:38.334 "nbd_device": "/dev/nbd1", 00:06:38.334 "bdev_name": "Malloc1" 00:06:38.334 } 00:06:38.334 ]' 00:06:38.334 09:16:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:38.334 { 00:06:38.334 "nbd_device": "/dev/nbd0", 00:06:38.334 "bdev_name": "Malloc0" 00:06:38.334 }, 00:06:38.334 { 00:06:38.334 "nbd_device": "/dev/nbd1", 00:06:38.334 "bdev_name": "Malloc1" 00:06:38.334 } 00:06:38.334 ]' 00:06:38.334 09:16:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:38.334 09:16:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:38.334 /dev/nbd1' 00:06:38.334 09:16:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:38.334 /dev/nbd1' 00:06:38.334 09:16:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:38.334 09:16:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:38.334 09:16:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:38.334 09:16:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:38.334 09:16:22 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:38.334 09:16:22 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:38.334 09:16:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.334 09:16:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:38.334 09:16:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:38.334 09:16:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:38.334 09:16:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:38.334 09:16:22 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:38.334 256+0 records in 00:06:38.334 256+0 records out 00:06:38.334 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00500701 s, 209 MB/s 00:06:38.334 09:16:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:38.334 09:16:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:38.334 256+0 records in 00:06:38.334 256+0 records out 00:06:38.335 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0206352 s, 50.8 MB/s 00:06:38.335 09:16:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:38.335 09:16:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:38.592 256+0 records in 00:06:38.593 256+0 records out 00:06:38.593 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0231122 s, 45.4 MB/s 00:06:38.593 09:16:22 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:38.593 09:16:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.593 09:16:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:38.593 09:16:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:38.593 09:16:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:38.593 09:16:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:38.593 09:16:22 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:38.593 09:16:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:38.593 09:16:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:38.593 09:16:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:38.593 09:16:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:38.593 09:16:22 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:38.593 09:16:22 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:38.593 09:16:22 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.593 09:16:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.593 09:16:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:38.593 09:16:22 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:38.593 09:16:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:38.593 09:16:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:38.851 09:16:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:38.851 09:16:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:38.851 09:16:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:38.851 09:16:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:38.851 09:16:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:38.851 09:16:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:38.851 09:16:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:38.851 09:16:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:38.851 09:16:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:38.851 09:16:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:39.108 09:16:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:39.108 09:16:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:39.108 09:16:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:39.108 09:16:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.108 09:16:23 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.108 09:16:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:39.108 09:16:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:39.108 09:16:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.108 09:16:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:39.108 09:16:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.108 09:16:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:39.364 09:16:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:39.364 09:16:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:39.364 09:16:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:39.364 09:16:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:39.364 09:16:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:39.364 09:16:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:39.364 09:16:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:39.364 09:16:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:39.364 09:16:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:39.364 09:16:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:39.364 09:16:23 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:39.364 09:16:23 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:39.364 09:16:23 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:39.623 09:16:23 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:39.881 [2024-07-14 09:16:24.146521] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:39.881 [2024-07-14 09:16:24.236554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.881 [2024-07-14 09:16:24.236554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.881 [2024-07-14 09:16:24.293431] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:39.881 [2024-07-14 09:16:24.293499] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:43.161 09:16:26 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:43.161 09:16:26 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:43.161 spdk_app_start Round 1 00:06:43.161 09:16:26 event.app_repeat -- event/event.sh@25 -- # waitforlisten 612830 /var/tmp/spdk-nbd.sock 00:06:43.161 09:16:26 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 612830 ']' 00:06:43.161 09:16:26 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:43.161 09:16:26 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:43.161 09:16:26 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:43.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
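For readers skimming the trace: the data path exercised in each round above reduces to filling a temporary file from /dev/urandom, copying it onto each exported nbd device with direct I/O, and comparing the device contents back against the file. A minimal sketch of that write/verify step, assuming /dev/nbd0 and /dev/nbd1 are already exported and using an illustrative temp path:

    tmp_file=/tmp/nbdrandtest                                        # stands in for .../spdk/test/event/nbdrandtest
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256              # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct   # write it to the device
        cmp -b -n 1M "$tmp_file" "$nbd"                              # read it back and verify
    done
    rm -f "$tmp_file"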
00:06:43.161 09:16:26 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:43.161 09:16:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:43.161 09:16:27 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:43.161 09:16:27 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:43.161 09:16:27 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:43.161 Malloc0 00:06:43.161 09:16:27 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:43.422 Malloc1 00:06:43.422 09:16:27 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:43.422 09:16:27 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.422 09:16:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:43.422 09:16:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:43.422 09:16:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:43.422 09:16:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:43.422 09:16:27 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:43.422 09:16:27 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.422 09:16:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:43.422 09:16:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:43.422 09:16:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:43.422 09:16:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:43.422 09:16:27 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:43.422 09:16:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:43.422 09:16:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:43.422 09:16:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:43.679 /dev/nbd0 00:06:43.679 09:16:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:43.679 09:16:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:43.679 09:16:27 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:43.679 09:16:27 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:43.679 09:16:27 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:43.679 09:16:27 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:43.679 09:16:27 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:43.679 09:16:27 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:43.679 09:16:27 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:43.679 09:16:27 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:43.679 09:16:27 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:43.679 1+0 records in 00:06:43.679 1+0 records out 00:06:43.679 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000201364 s, 20.3 MB/s 00:06:43.679 09:16:27 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:43.679 09:16:27 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:43.680 09:16:27 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:43.680 09:16:27 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:43.680 09:16:27 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:43.680 09:16:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:43.680 09:16:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:43.680 09:16:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:43.937 /dev/nbd1 00:06:43.937 09:16:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:43.937 09:16:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:43.937 09:16:28 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:43.937 09:16:28 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:43.937 09:16:28 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:43.937 09:16:28 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:43.937 09:16:28 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:43.937 09:16:28 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:43.937 09:16:28 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:43.937 09:16:28 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:43.937 09:16:28 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:43.937 1+0 records in 00:06:43.937 1+0 records out 00:06:43.937 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000217942 s, 18.8 MB/s 00:06:43.937 09:16:28 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:43.937 09:16:28 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:43.937 09:16:28 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:43.937 09:16:28 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:43.937 09:16:28 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:43.937 09:16:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:43.937 09:16:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:43.937 09:16:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:43.937 09:16:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.937 09:16:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:44.195 09:16:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:44.195 { 00:06:44.195 "nbd_device": "/dev/nbd0", 00:06:44.195 "bdev_name": "Malloc0" 00:06:44.195 }, 00:06:44.195 { 00:06:44.195 "nbd_device": "/dev/nbd1", 00:06:44.195 "bdev_name": "Malloc1" 00:06:44.195 } 00:06:44.195 ]' 00:06:44.195 09:16:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:44.195 { 00:06:44.195 "nbd_device": "/dev/nbd0", 00:06:44.195 "bdev_name": "Malloc0" 00:06:44.195 }, 00:06:44.195 { 00:06:44.195 "nbd_device": "/dev/nbd1", 00:06:44.195 "bdev_name": "Malloc1" 00:06:44.195 } 00:06:44.195 ]' 00:06:44.195 09:16:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:44.195 09:16:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:44.195 /dev/nbd1' 00:06:44.195 09:16:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:44.195 /dev/nbd1' 00:06:44.195 09:16:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:44.195 09:16:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:44.195 09:16:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:44.196 09:16:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:44.196 09:16:28 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:44.196 09:16:28 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:44.196 09:16:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.196 09:16:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:44.196 09:16:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:44.196 09:16:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:44.196 09:16:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:44.196 09:16:28 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:44.196 256+0 records in 00:06:44.196 256+0 records out 00:06:44.196 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00502136 s, 209 MB/s 00:06:44.196 09:16:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:44.196 09:16:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:44.196 256+0 records in 00:06:44.196 256+0 records out 00:06:44.196 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0202464 s, 51.8 MB/s 00:06:44.196 09:16:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:44.196 09:16:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:44.196 256+0 records in 00:06:44.196 256+0 records out 00:06:44.196 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0223141 s, 47.0 MB/s 00:06:44.196 09:16:28 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:44.196 09:16:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.196 09:16:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:44.196 09:16:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:44.196 09:16:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:44.196 09:16:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:44.196 09:16:28 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:44.196 09:16:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:44.196 09:16:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:44.196 09:16:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:44.196 09:16:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:44.196 09:16:28 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:44.196 09:16:28 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:44.196 09:16:28 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.196 09:16:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.196 09:16:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:44.196 09:16:28 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:44.196 09:16:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:44.196 09:16:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:44.454 09:16:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:44.454 09:16:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:44.454 09:16:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:44.454 09:16:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:44.454 09:16:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:44.454 09:16:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:44.454 09:16:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:44.454 09:16:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:44.454 09:16:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:44.454 09:16:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:44.713 09:16:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:44.713 09:16:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:44.713 09:16:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:44.713 09:16:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:44.713 09:16:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:44.713 09:16:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:44.713 09:16:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:44.713 09:16:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:44.713 09:16:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:44.713 09:16:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.713 09:16:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:44.971 09:16:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:44.971 09:16:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:44.971 09:16:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:45.229 09:16:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:45.229 09:16:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:45.229 09:16:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:45.229 09:16:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:45.229 09:16:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:45.229 09:16:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:45.229 09:16:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:45.229 09:16:29 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:45.229 09:16:29 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:45.229 09:16:29 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:45.488 09:16:29 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:45.488 [2024-07-14 09:16:29.936272] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:45.746 [2024-07-14 09:16:30.032495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.746 [2024-07-14 09:16:30.032499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.746 [2024-07-14 09:16:30.094299] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:45.746 [2024-07-14 09:16:30.094370] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:48.302 09:16:32 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:48.302 09:16:32 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:48.302 spdk_app_start Round 2 00:06:48.302 09:16:32 event.app_repeat -- event/event.sh@25 -- # waitforlisten 612830 /var/tmp/spdk-nbd.sock 00:06:48.302 09:16:32 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 612830 ']' 00:06:48.302 09:16:32 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:48.302 09:16:32 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:48.302 09:16:32 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:48.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
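The per-round setup itself is driven entirely over the app's RPC socket (/var/tmp/spdk-nbd.sock). Roughly, and assuming an SPDK checkout at $SPDK_DIR, each round amounts to:

    rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create 64 4096                    # 64 MiB malloc bdev, 4 KiB blocks; prints Malloc0
    $rpc bdev_malloc_create 64 4096                    # second bdev; prints Malloc1
    $rpc nbd_start_disk Malloc0 /dev/nbd0              # export each bdev as a kernel nbd device
    $rpc nbd_start_disk Malloc1 /dev/nbd1
    $rpc nbd_get_disks | jq -r '.[] | .nbd_device'     # expect /dev/nbd0 and /dev/nbd1

This is followed by the write/verify step sketched earlier and an nbd_stop_disk for each device before the instance is recycled.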
00:06:48.302 09:16:32 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:48.302 09:16:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:48.560 09:16:32 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:48.560 09:16:32 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:48.560 09:16:32 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:48.819 Malloc0 00:06:48.819 09:16:33 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:49.076 Malloc1 00:06:49.076 09:16:33 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:49.076 09:16:33 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.076 09:16:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:49.076 09:16:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:49.076 09:16:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.076 09:16:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:49.076 09:16:33 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:49.076 09:16:33 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.076 09:16:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:49.076 09:16:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:49.076 09:16:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.076 09:16:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:49.076 09:16:33 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:49.076 09:16:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:49.076 09:16:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:49.076 09:16:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:49.332 /dev/nbd0 00:06:49.332 09:16:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:49.332 09:16:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:49.332 09:16:33 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:49.332 09:16:33 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:49.332 09:16:33 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:49.332 09:16:33 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:49.332 09:16:33 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:49.332 09:16:33 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:49.332 09:16:33 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:49.332 09:16:33 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:49.332 09:16:33 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:49.332 1+0 records in 00:06:49.332 1+0 records out 00:06:49.332 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000204125 s, 20.1 MB/s 00:06:49.332 09:16:33 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:49.332 09:16:33 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:49.332 09:16:33 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:49.332 09:16:33 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:49.332 09:16:33 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:49.332 09:16:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:49.332 09:16:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:49.332 09:16:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:49.588 /dev/nbd1 00:06:49.588 09:16:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:49.588 09:16:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:49.588 09:16:34 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:49.588 09:16:34 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:49.588 09:16:34 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:49.588 09:16:34 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:49.588 09:16:34 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:49.588 09:16:34 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:49.588 09:16:34 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:49.588 09:16:34 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:49.588 09:16:34 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:49.588 1+0 records in 00:06:49.588 1+0 records out 00:06:49.588 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000227491 s, 18.0 MB/s 00:06:49.588 09:16:34 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:49.588 09:16:34 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:49.588 09:16:34 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:49.846 09:16:34 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:49.846 09:16:34 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:49.846 09:16:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:49.846 09:16:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:49.846 09:16:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:49.846 09:16:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.846 09:16:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:49.846 09:16:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:49.846 { 00:06:49.846 "nbd_device": "/dev/nbd0", 00:06:49.846 "bdev_name": "Malloc0" 00:06:49.846 }, 00:06:49.846 { 00:06:49.846 "nbd_device": "/dev/nbd1", 00:06:49.846 "bdev_name": "Malloc1" 00:06:49.846 } 00:06:49.846 ]' 00:06:49.846 09:16:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:49.846 { 00:06:49.846 "nbd_device": "/dev/nbd0", 00:06:49.846 "bdev_name": "Malloc0" 00:06:49.846 }, 00:06:49.846 { 00:06:49.846 "nbd_device": "/dev/nbd1", 00:06:49.846 "bdev_name": "Malloc1" 00:06:49.846 } 00:06:49.846 ]' 00:06:49.846 09:16:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:50.104 09:16:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:50.104 /dev/nbd1' 00:06:50.104 09:16:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:50.104 /dev/nbd1' 00:06:50.104 09:16:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:50.104 09:16:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:50.104 09:16:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:50.104 09:16:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:50.104 09:16:34 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:50.104 09:16:34 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:50.104 09:16:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.104 09:16:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:50.104 09:16:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:50.104 09:16:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:50.104 09:16:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:50.104 09:16:34 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:50.104 256+0 records in 00:06:50.104 256+0 records out 00:06:50.104 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00398816 s, 263 MB/s 00:06:50.104 09:16:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:50.104 09:16:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:50.104 256+0 records in 00:06:50.104 256+0 records out 00:06:50.104 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0238118 s, 44.0 MB/s 00:06:50.104 09:16:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:50.104 09:16:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:50.104 256+0 records in 00:06:50.104 256+0 records out 00:06:50.104 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.022899 s, 45.8 MB/s 00:06:50.104 09:16:34 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:50.104 09:16:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.104 09:16:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:50.104 09:16:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:50.104 09:16:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:50.104 09:16:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:50.104 09:16:34 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:50.104 09:16:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:50.104 09:16:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:50.104 09:16:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:50.104 09:16:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:50.104 09:16:34 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:50.104 09:16:34 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:50.104 09:16:34 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.104 09:16:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.104 09:16:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:50.104 09:16:34 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:50.104 09:16:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:50.104 09:16:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:50.362 09:16:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:50.362 09:16:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:50.362 09:16:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:50.362 09:16:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.362 09:16:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.362 09:16:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:50.362 09:16:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:50.362 09:16:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.362 09:16:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:50.362 09:16:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:50.620 09:16:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:50.620 09:16:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:50.620 09:16:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:50.620 09:16:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.620 09:16:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.620 09:16:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:50.620 09:16:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:50.620 09:16:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.620 09:16:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:50.620 09:16:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.620 09:16:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:50.878 09:16:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:50.878 09:16:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:50.878 09:16:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:50.878 09:16:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:50.878 09:16:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:50.878 09:16:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:50.878 09:16:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:50.878 09:16:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:50.878 09:16:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:50.878 09:16:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:50.878 09:16:35 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:50.878 09:16:35 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:50.878 09:16:35 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:51.136 09:16:35 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:51.394 [2024-07-14 09:16:35.733763] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:51.394 [2024-07-14 09:16:35.822485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.394 [2024-07-14 09:16:35.822490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.652 [2024-07-14 09:16:35.880955] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:51.652 [2024-07-14 09:16:35.881020] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:54.176 09:16:38 event.app_repeat -- event/event.sh@38 -- # waitforlisten 612830 /var/tmp/spdk-nbd.sock 00:06:54.176 09:16:38 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 612830 ']' 00:06:54.176 09:16:38 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:54.176 09:16:38 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:54.176 09:16:38 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:54.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
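Stepping back, app_repeat's outer loop (the 'for i in {0..2}' visible in the trace) simply repeats that setup/verify cycle and then asks the app to recycle its framework between rounds. A condensed sketch with the per-round body elided:

    rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"   # illustrative; see test/event/event.sh in the SPDK tree
    for round in 0 1 2; do
        echo "spdk_app_start Round $round"
        # ... recreate Malloc0/Malloc1, export them over nbd, write and verify data ...
        $rpc spdk_kill_instance SIGTERM     # tear down this iteration's framework
        sleep 3                             # matches the 'sleep 3' between rounds above
    done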
00:06:54.176 09:16:38 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:54.176 09:16:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:54.434 09:16:38 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:54.434 09:16:38 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:54.434 09:16:38 event.app_repeat -- event/event.sh@39 -- # killprocess 612830 00:06:54.434 09:16:38 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 612830 ']' 00:06:54.434 09:16:38 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 612830 00:06:54.434 09:16:38 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:54.434 09:16:38 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:54.434 09:16:38 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 612830 00:06:54.434 09:16:38 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:54.434 09:16:38 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:54.434 09:16:38 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 612830' 00:06:54.434 killing process with pid 612830 00:06:54.434 09:16:38 event.app_repeat -- common/autotest_common.sh@967 -- # kill 612830 00:06:54.434 09:16:38 event.app_repeat -- common/autotest_common.sh@972 -- # wait 612830 00:06:54.692 spdk_app_start is called in Round 0. 00:06:54.692 Shutdown signal received, stop current app iteration 00:06:54.692 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 reinitialization... 00:06:54.692 spdk_app_start is called in Round 1. 00:06:54.692 Shutdown signal received, stop current app iteration 00:06:54.692 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 reinitialization... 00:06:54.692 spdk_app_start is called in Round 2. 00:06:54.692 Shutdown signal received, stop current app iteration 00:06:54.692 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 reinitialization... 00:06:54.692 spdk_app_start is called in Round 3. 
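The killprocess sequence traced just above (ps --no-headers -o comm=, the sudo guard, kill, wait) comes from a shared helper in autotest_common.sh; a simplified approximation of what it does:

    killprocess() {
        local pid=$1
        local name
        name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 for a running SPDK app
        [ "$name" = sudo ] && return 1            # refuse to kill a sudo wrapper directly
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                       # reap it; a nonzero exit from SIGTERM is expected
    }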
00:06:54.692 Shutdown signal received, stop current app iteration 00:06:54.692 09:16:38 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:54.692 09:16:38 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:54.692 00:06:54.692 real 0m17.892s 00:06:54.692 user 0m38.982s 00:06:54.692 sys 0m3.151s 00:06:54.692 09:16:38 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:54.692 09:16:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:54.692 ************************************ 00:06:54.692 END TEST app_repeat 00:06:54.692 ************************************ 00:06:54.692 09:16:38 event -- common/autotest_common.sh@1142 -- # return 0 00:06:54.692 09:16:38 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:54.692 09:16:38 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:54.692 09:16:38 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:54.692 09:16:38 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.692 09:16:38 event -- common/autotest_common.sh@10 -- # set +x 00:06:54.692 ************************************ 00:06:54.692 START TEST cpu_locks 00:06:54.692 ************************************ 00:06:54.692 09:16:39 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:54.692 * Looking for test storage... 00:06:54.692 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:54.692 09:16:39 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:54.692 09:16:39 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:54.692 09:16:39 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:54.692 09:16:39 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:54.692 09:16:39 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:54.692 09:16:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.692 09:16:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:54.692 ************************************ 00:06:54.692 START TEST default_locks 00:06:54.692 ************************************ 00:06:54.692 09:16:39 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:54.692 09:16:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=615177 00:06:54.692 09:16:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:54.692 09:16:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 615177 00:06:54.692 09:16:39 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 615177 ']' 00:06:54.692 09:16:39 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.692 09:16:39 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:54.692 09:16:39 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
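The default_locks test that follows starts a bare spdk_tgt on core mask 0x1 and then confirms the target is actually holding its per-core file lock. The check is just lslocks filtered for the SPDK lock-file name, roughly:

    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock   # core-lock file names contain 'spdk_cpu_lock'
    }
    # after spdk_tgt -m 0x1 is up and its /var/tmp/spdk.sock RPC socket is listening:
    #   locks_exist "$spdk_tgt_pid" && echo "core lock is held"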
00:06:54.692 09:16:39 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:54.692 09:16:39 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:54.950 [2024-07-14 09:16:39.153551] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:54.950 [2024-07-14 09:16:39.153645] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid615177 ] 00:06:54.950 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.950 [2024-07-14 09:16:39.210455] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.950 [2024-07-14 09:16:39.297353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.208 09:16:39 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:55.208 09:16:39 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:55.208 09:16:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 615177 00:06:55.208 09:16:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 615177 00:06:55.208 09:16:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:55.464 lslocks: write error 00:06:55.464 09:16:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 615177 00:06:55.464 09:16:39 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 615177 ']' 00:06:55.464 09:16:39 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 615177 00:06:55.464 09:16:39 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:55.464 09:16:39 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:55.464 09:16:39 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 615177 00:06:55.722 09:16:39 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:55.722 09:16:39 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:55.723 09:16:39 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 615177' 00:06:55.723 killing process with pid 615177 00:06:55.723 09:16:39 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 615177 00:06:55.723 09:16:39 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 615177 00:06:55.980 09:16:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 615177 00:06:55.980 09:16:40 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:55.980 09:16:40 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 615177 00:06:55.981 09:16:40 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:55.981 09:16:40 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:55.981 09:16:40 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:55.981 09:16:40 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:55.981 09:16:40 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 615177 00:06:55.981 09:16:40 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 615177 ']' 00:06:55.981 09:16:40 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.981 09:16:40 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:55.981 09:16:40 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.981 09:16:40 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:55.981 09:16:40 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:55.981 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (615177) - No such process 00:06:55.981 ERROR: process (pid: 615177) is no longer running 00:06:55.981 09:16:40 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:55.981 09:16:40 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:55.981 09:16:40 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:55.981 09:16:40 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:55.981 09:16:40 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:55.981 09:16:40 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:55.981 09:16:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:55.981 09:16:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:55.981 09:16:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:55.981 09:16:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:55.981 00:06:55.981 real 0m1.252s 00:06:55.981 user 0m1.208s 00:06:55.981 sys 0m0.522s 00:06:55.981 09:16:40 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.981 09:16:40 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:55.981 ************************************ 00:06:55.981 END TEST default_locks 00:06:55.981 ************************************ 00:06:55.981 09:16:40 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:55.981 09:16:40 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:55.981 09:16:40 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:55.981 09:16:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.981 09:16:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:55.981 ************************************ 00:06:55.981 START TEST default_locks_via_rpc 00:06:55.981 ************************************ 00:06:55.981 09:16:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:55.981 09:16:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=615367 00:06:55.981 09:16:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:55.981 09:16:40 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 615367 00:06:55.981 09:16:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 615367 ']' 00:06:55.981 09:16:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.981 09:16:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:55.981 09:16:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.981 09:16:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:55.981 09:16:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.239 [2024-07-14 09:16:40.455731] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:56.239 [2024-07-14 09:16:40.455828] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid615367 ] 00:06:56.239 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.239 [2024-07-14 09:16:40.514292] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.239 [2024-07-14 09:16:40.601432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.498 09:16:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:56.498 09:16:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:56.498 09:16:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:56.498 09:16:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.498 09:16:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.498 09:16:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.498 09:16:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:56.498 09:16:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:56.498 09:16:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:56.498 09:16:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:56.498 09:16:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:56.498 09:16:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.498 09:16:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.498 09:16:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.498 09:16:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 615367 00:06:56.498 09:16:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 615367 00:06:56.498 09:16:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:56.755 09:16:41 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 615367 00:06:56.755 09:16:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 615367 ']' 00:06:56.755 09:16:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 615367 00:06:56.755 09:16:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:56.755 09:16:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:56.755 09:16:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 615367 00:06:57.013 09:16:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:57.013 09:16:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:57.013 09:16:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 615367' 00:06:57.013 killing process with pid 615367 00:06:57.013 09:16:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 615367 00:06:57.013 09:16:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 615367 00:06:57.272 00:06:57.272 real 0m1.218s 00:06:57.272 user 0m1.154s 00:06:57.273 sys 0m0.533s 00:06:57.273 09:16:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.273 09:16:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.273 ************************************ 00:06:57.273 END TEST default_locks_via_rpc 00:06:57.273 ************************************ 00:06:57.273 09:16:41 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:57.273 09:16:41 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:57.273 09:16:41 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:57.273 09:16:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.273 09:16:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:57.273 ************************************ 00:06:57.273 START TEST non_locking_app_on_locked_coremask 00:06:57.273 ************************************ 00:06:57.273 09:16:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:57.273 09:16:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=615622 00:06:57.273 09:16:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:57.273 09:16:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 615622 /var/tmp/spdk.sock 00:06:57.273 09:16:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 615622 ']' 00:06:57.273 09:16:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.273 09:16:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:57.273 09:16:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.273 09:16:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:57.273 09:16:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:57.531 [2024-07-14 09:16:41.725722] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:57.531 [2024-07-14 09:16:41.725807] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid615622 ] 00:06:57.531 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.531 [2024-07-14 09:16:41.787596] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.531 [2024-07-14 09:16:41.877248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.790 09:16:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:57.790 09:16:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:57.790 09:16:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=615632 00:06:57.790 09:16:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:57.790 09:16:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 615632 /var/tmp/spdk2.sock 00:06:57.790 09:16:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 615632 ']' 00:06:57.790 09:16:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:57.790 09:16:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:57.790 09:16:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:57.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:57.790 09:16:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:57.790 09:16:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:57.790 [2024-07-14 09:16:42.187424] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:57.790 [2024-07-14 09:16:42.187511] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid615632 ] 00:06:57.790 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.048 [2024-07-14 09:16:42.286178] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:58.048 [2024-07-14 09:16:42.286220] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.048 [2024-07-14 09:16:42.470989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.984 09:16:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:58.984 09:16:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:58.984 09:16:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 615622 00:06:58.984 09:16:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:58.984 09:16:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 615622 00:06:59.241 lslocks: write error 00:06:59.241 09:16:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 615622 00:06:59.241 09:16:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 615622 ']' 00:06:59.241 09:16:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 615622 00:06:59.241 09:16:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:59.241 09:16:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:59.241 09:16:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 615622 00:06:59.241 09:16:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:59.241 09:16:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:59.241 09:16:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 615622' 00:06:59.241 killing process with pid 615622 00:06:59.241 09:16:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 615622 00:06:59.241 09:16:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 615622 00:07:00.172 09:16:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 615632 00:07:00.172 09:16:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 615632 ']' 00:07:00.172 09:16:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 615632 00:07:00.172 09:16:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:00.172 09:16:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:00.173 09:16:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 615632 00:07:00.173 09:16:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:00.173 09:16:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:00.173 09:16:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 615632' 00:07:00.173 killing 
process with pid 615632 00:07:00.173 09:16:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 615632 00:07:00.173 09:16:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 615632 00:07:00.430 00:07:00.430 real 0m3.086s 00:07:00.430 user 0m3.195s 00:07:00.430 sys 0m1.043s 00:07:00.430 09:16:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.430 09:16:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:00.430 ************************************ 00:07:00.430 END TEST non_locking_app_on_locked_coremask 00:07:00.430 ************************************ 00:07:00.430 09:16:44 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:00.430 09:16:44 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:00.430 09:16:44 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:00.430 09:16:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.430 09:16:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:00.430 ************************************ 00:07:00.430 START TEST locking_app_on_unlocked_coremask 00:07:00.430 ************************************ 00:07:00.430 09:16:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:07:00.430 09:16:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=615937 00:07:00.430 09:16:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:00.430 09:16:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 615937 /var/tmp/spdk.sock 00:07:00.430 09:16:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 615937 ']' 00:07:00.430 09:16:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.430 09:16:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:00.430 09:16:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.430 09:16:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:00.430 09:16:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:00.430 [2024-07-14 09:16:44.862758] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:00.431 [2024-07-14 09:16:44.862846] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid615937 ] 00:07:00.689 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.689 [2024-07-14 09:16:44.925117] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:00.689 [2024-07-14 09:16:44.925154] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.689 [2024-07-14 09:16:45.019097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.947 09:16:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:00.947 09:16:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:00.947 09:16:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=616066 00:07:00.947 09:16:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:00.947 09:16:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 616066 /var/tmp/spdk2.sock 00:07:00.947 09:16:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 616066 ']' 00:07:00.947 09:16:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:00.947 09:16:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:00.947 09:16:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:00.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:00.947 09:16:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:00.947 09:16:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:00.947 [2024-07-14 09:16:45.335775] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:07:00.947 [2024-07-14 09:16:45.335859] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid616066 ] 00:07:00.947 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.205 [2024-07-14 09:16:45.432548] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.205 [2024-07-14 09:16:45.616791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.163 09:16:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:02.163 09:16:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:02.163 09:16:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 616066 00:07:02.163 09:16:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 616066 00:07:02.163 09:16:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:02.421 lslocks: write error 00:07:02.421 09:16:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 615937 00:07:02.421 09:16:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 615937 ']' 00:07:02.421 09:16:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 615937 00:07:02.421 09:16:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:02.421 09:16:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:02.421 09:16:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 615937 00:07:02.679 09:16:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:02.679 09:16:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:02.679 09:16:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 615937' 00:07:02.679 killing process with pid 615937 00:07:02.679 09:16:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 615937 00:07:02.679 09:16:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 615937 00:07:03.615 09:16:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 616066 00:07:03.615 09:16:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 616066 ']' 00:07:03.615 09:16:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 616066 00:07:03.615 09:16:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:03.615 09:16:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:03.615 09:16:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 616066 00:07:03.615 09:16:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:07:03.615 09:16:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:03.615 09:16:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 616066' 00:07:03.615 killing process with pid 616066 00:07:03.615 09:16:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 616066 00:07:03.615 09:16:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 616066 00:07:03.873 00:07:03.873 real 0m3.329s 00:07:03.873 user 0m3.458s 00:07:03.873 sys 0m1.105s 00:07:03.873 09:16:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:03.873 09:16:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:03.873 ************************************ 00:07:03.873 END TEST locking_app_on_unlocked_coremask 00:07:03.873 ************************************ 00:07:03.873 09:16:48 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:03.873 09:16:48 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:03.873 09:16:48 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:03.873 09:16:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.873 09:16:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:03.873 ************************************ 00:07:03.873 START TEST locking_app_on_locked_coremask 00:07:03.873 ************************************ 00:07:03.873 09:16:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:07:03.873 09:16:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=616380 00:07:03.873 09:16:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:03.873 09:16:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 616380 /var/tmp/spdk.sock 00:07:03.873 09:16:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 616380 ']' 00:07:03.873 09:16:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.873 09:16:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:03.873 09:16:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.873 09:16:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:03.873 09:16:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:03.873 [2024-07-14 09:16:48.241382] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:07:03.873 [2024-07-14 09:16:48.241448] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid616380 ] 00:07:03.873 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.873 [2024-07-14 09:16:48.300162] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.132 [2024-07-14 09:16:48.389083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.391 09:16:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:04.391 09:16:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:04.391 09:16:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=616503 00:07:04.391 09:16:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:04.391 09:16:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 616503 /var/tmp/spdk2.sock 00:07:04.391 09:16:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:04.391 09:16:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 616503 /var/tmp/spdk2.sock 00:07:04.391 09:16:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:04.391 09:16:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:04.391 09:16:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:04.391 09:16:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:04.391 09:16:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 616503 /var/tmp/spdk2.sock 00:07:04.391 09:16:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 616503 ']' 00:07:04.391 09:16:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:04.391 09:16:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:04.391 09:16:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:04.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:04.391 09:16:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:04.391 09:16:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:04.391 [2024-07-14 09:16:48.685321] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:07:04.391 [2024-07-14 09:16:48.685406] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid616503 ] 00:07:04.391 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.391 [2024-07-14 09:16:48.781267] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 616380 has claimed it. 00:07:04.391 [2024-07-14 09:16:48.781329] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:04.956 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (616503) - No such process 00:07:04.956 ERROR: process (pid: 616503) is no longer running 00:07:04.956 09:16:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:04.956 09:16:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:04.956 09:16:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:04.956 09:16:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:04.956 09:16:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:04.956 09:16:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:04.956 09:16:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 616380 00:07:04.956 09:16:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 616380 00:07:04.956 09:16:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:05.523 lslocks: write error 00:07:05.523 09:16:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 616380 00:07:05.523 09:16:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 616380 ']' 00:07:05.523 09:16:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 616380 00:07:05.523 09:16:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:05.523 09:16:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:05.523 09:16:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 616380 00:07:05.523 09:16:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:05.523 09:16:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:05.523 09:16:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 616380' 00:07:05.523 killing process with pid 616380 00:07:05.523 09:16:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 616380 00:07:05.523 09:16:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 616380 00:07:05.780 00:07:05.780 real 0m1.987s 00:07:05.780 user 0m2.147s 00:07:05.780 sys 0m0.632s 00:07:05.780 09:16:50 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.780 09:16:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:05.780 ************************************ 00:07:05.780 END TEST locking_app_on_locked_coremask 00:07:05.780 ************************************ 00:07:05.780 09:16:50 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:05.780 09:16:50 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:05.780 09:16:50 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:05.780 09:16:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.780 09:16:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:05.780 ************************************ 00:07:05.780 START TEST locking_overlapped_coremask 00:07:05.780 ************************************ 00:07:05.780 09:16:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:07:05.780 09:16:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=616677 00:07:05.780 09:16:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:05.780 09:16:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 616677 /var/tmp/spdk.sock 00:07:05.780 09:16:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 616677 ']' 00:07:05.780 09:16:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.780 09:16:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:05.780 09:16:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.780 09:16:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:05.780 09:16:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:06.037 [2024-07-14 09:16:50.277806] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:07:06.037 [2024-07-14 09:16:50.277909] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid616677 ] 00:07:06.037 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.037 [2024-07-14 09:16:50.341110] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:06.038 [2024-07-14 09:16:50.431966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.038 [2024-07-14 09:16:50.432022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:06.038 [2024-07-14 09:16:50.432026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.295 09:16:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:06.295 09:16:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:06.295 09:16:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=616798 00:07:06.295 09:16:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 616798 /var/tmp/spdk2.sock 00:07:06.295 09:16:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:06.295 09:16:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:06.295 09:16:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 616798 /var/tmp/spdk2.sock 00:07:06.295 09:16:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:06.295 09:16:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:06.295 09:16:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:06.295 09:16:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:06.295 09:16:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 616798 /var/tmp/spdk2.sock 00:07:06.295 09:16:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 616798 ']' 00:07:06.295 09:16:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:06.296 09:16:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:06.296 09:16:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:06.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:06.296 09:16:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:06.296 09:16:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:06.296 [2024-07-14 09:16:50.734329] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:07:06.296 [2024-07-14 09:16:50.734415] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid616798 ] 00:07:06.553 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.553 [2024-07-14 09:16:50.824301] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 616677 has claimed it. 00:07:06.553 [2024-07-14 09:16:50.824372] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:07.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (616798) - No such process 00:07:07.119 ERROR: process (pid: 616798) is no longer running 00:07:07.119 09:16:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:07.119 09:16:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:07.119 09:16:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:07.119 09:16:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:07.119 09:16:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:07.119 09:16:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:07.119 09:16:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:07.119 09:16:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:07.119 09:16:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:07.119 09:16:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:07.119 09:16:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 616677 00:07:07.119 09:16:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 616677 ']' 00:07:07.119 09:16:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 616677 00:07:07.119 09:16:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:07:07.119 09:16:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:07.119 09:16:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 616677 00:07:07.119 09:16:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:07.119 09:16:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:07.119 09:16:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 616677' 00:07:07.119 killing process with pid 616677 00:07:07.119 09:16:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 
-- # kill 616677 00:07:07.119 09:16:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 616677 00:07:07.687 00:07:07.687 real 0m1.633s 00:07:07.687 user 0m4.411s 00:07:07.687 sys 0m0.450s 00:07:07.687 09:16:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.687 09:16:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:07.687 ************************************ 00:07:07.687 END TEST locking_overlapped_coremask 00:07:07.687 ************************************ 00:07:07.687 09:16:51 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:07.687 09:16:51 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:07.687 09:16:51 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:07.687 09:16:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.687 09:16:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:07.687 ************************************ 00:07:07.687 START TEST locking_overlapped_coremask_via_rpc 00:07:07.687 ************************************ 00:07:07.687 09:16:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:07:07.687 09:16:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=616964 00:07:07.687 09:16:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:07.687 09:16:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 616964 /var/tmp/spdk.sock 00:07:07.687 09:16:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 616964 ']' 00:07:07.687 09:16:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.687 09:16:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:07.687 09:16:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.687 09:16:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:07.687 09:16:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.687 [2024-07-14 09:16:51.956427] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:07.687 [2024-07-14 09:16:51.956505] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid616964 ] 00:07:07.687 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.687 [2024-07-14 09:16:52.018997] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:07.687 [2024-07-14 09:16:52.019036] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:07.687 [2024-07-14 09:16:52.110195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.687 [2024-07-14 09:16:52.110249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:07.687 [2024-07-14 09:16:52.110267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.946 09:16:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:07.946 09:16:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:07.946 09:16:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=616978 00:07:07.946 09:16:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:07.946 09:16:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 616978 /var/tmp/spdk2.sock 00:07:07.946 09:16:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 616978 ']' 00:07:07.946 09:16:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:07.946 09:16:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:07.946 09:16:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:07.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:07.946 09:16:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:07.946 09:16:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.205 [2024-07-14 09:16:52.412356] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:08.205 [2024-07-14 09:16:52.412439] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid616978 ] 00:07:08.205 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.205 [2024-07-14 09:16:52.501733] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:08.205 [2024-07-14 09:16:52.501772] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:08.463 [2024-07-14 09:16:52.680787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:08.463 [2024-07-14 09:16:52.680853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:08.463 [2024-07-14 09:16:52.680855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:09.029 09:16:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:09.029 09:16:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:09.029 09:16:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:09.029 09:16:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.029 09:16:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.029 09:16:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.029 09:16:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:09.029 09:16:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:09.029 09:16:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:09.029 09:16:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:09.029 09:16:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:09.029 09:16:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:09.029 09:16:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:09.029 09:16:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:09.029 09:16:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.029 09:16:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.029 [2024-07-14 09:16:53.347966] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 616964 has claimed it. 
00:07:09.029 request: 00:07:09.029 { 00:07:09.029 "method": "framework_enable_cpumask_locks", 00:07:09.029 "req_id": 1 00:07:09.029 } 00:07:09.029 Got JSON-RPC error response 00:07:09.029 response: 00:07:09.029 { 00:07:09.029 "code": -32603, 00:07:09.029 "message": "Failed to claim CPU core: 2" 00:07:09.029 } 00:07:09.029 09:16:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:09.029 09:16:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:09.029 09:16:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:09.029 09:16:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:09.029 09:16:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:09.029 09:16:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 616964 /var/tmp/spdk.sock 00:07:09.029 09:16:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 616964 ']' 00:07:09.029 09:16:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.029 09:16:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:09.029 09:16:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.029 09:16:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:09.029 09:16:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.286 09:16:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:09.286 09:16:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:09.286 09:16:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 616978 /var/tmp/spdk2.sock 00:07:09.286 09:16:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 616978 ']' 00:07:09.286 09:16:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:09.286 09:16:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:09.286 09:16:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:09.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:09.286 09:16:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:09.286 09:16:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.544 09:16:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:09.544 09:16:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:09.544 09:16:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:09.544 09:16:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:09.544 09:16:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:09.544 09:16:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:09.544 00:07:09.544 real 0m1.922s 00:07:09.544 user 0m0.965s 00:07:09.544 sys 0m0.182s 00:07:09.544 09:16:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.544 09:16:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.544 ************************************ 00:07:09.544 END TEST locking_overlapped_coremask_via_rpc 00:07:09.544 ************************************ 00:07:09.544 09:16:53 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:09.544 09:16:53 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:09.544 09:16:53 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 616964 ]] 00:07:09.544 09:16:53 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 616964 00:07:09.544 09:16:53 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 616964 ']' 00:07:09.544 09:16:53 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 616964 00:07:09.544 09:16:53 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:07:09.544 09:16:53 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:09.544 09:16:53 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 616964 00:07:09.544 09:16:53 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:09.544 09:16:53 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:09.544 09:16:53 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 616964' 00:07:09.544 killing process with pid 616964 00:07:09.544 09:16:53 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 616964 00:07:09.544 09:16:53 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 616964 00:07:10.111 09:16:54 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 616978 ]] 00:07:10.111 09:16:54 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 616978 00:07:10.111 09:16:54 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 616978 ']' 00:07:10.111 09:16:54 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 616978 00:07:10.111 09:16:54 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 
00:07:10.111 09:16:54 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:10.111 09:16:54 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 616978 00:07:10.111 09:16:54 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:07:10.111 09:16:54 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:07:10.111 09:16:54 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 616978' 00:07:10.111 killing process with pid 616978 00:07:10.111 09:16:54 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 616978 00:07:10.111 09:16:54 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 616978 00:07:10.369 09:16:54 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:10.369 09:16:54 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:10.369 09:16:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 616964 ]] 00:07:10.369 09:16:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 616964 00:07:10.369 09:16:54 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 616964 ']' 00:07:10.369 09:16:54 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 616964 00:07:10.369 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (616964) - No such process 00:07:10.369 09:16:54 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 616964 is not found' 00:07:10.369 Process with pid 616964 is not found 00:07:10.369 09:16:54 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 616978 ]] 00:07:10.369 09:16:54 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 616978 00:07:10.369 09:16:54 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 616978 ']' 00:07:10.369 09:16:54 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 616978 00:07:10.369 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (616978) - No such process 00:07:10.369 09:16:54 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 616978 is not found' 00:07:10.369 Process with pid 616978 is not found 00:07:10.369 09:16:54 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:10.369 00:07:10.369 real 0m15.691s 00:07:10.369 user 0m27.110s 00:07:10.369 sys 0m5.347s 00:07:10.369 09:16:54 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:10.369 09:16:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:10.369 ************************************ 00:07:10.369 END TEST cpu_locks 00:07:10.369 ************************************ 00:07:10.369 09:16:54 event -- common/autotest_common.sh@1142 -- # return 0 00:07:10.369 00:07:10.369 real 0m39.943s 00:07:10.369 user 1m15.321s 00:07:10.369 sys 0m9.304s 00:07:10.369 09:16:54 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:10.369 09:16:54 event -- common/autotest_common.sh@10 -- # set +x 00:07:10.369 ************************************ 00:07:10.369 END TEST event 00:07:10.369 ************************************ 00:07:10.369 09:16:54 -- common/autotest_common.sh@1142 -- # return 0 00:07:10.369 09:16:54 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:10.369 09:16:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:10.369 09:16:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.369 09:16:54 -- 
common/autotest_common.sh@10 -- # set +x 00:07:10.369 ************************************ 00:07:10.369 START TEST thread 00:07:10.369 ************************************ 00:07:10.369 09:16:54 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:10.628 * Looking for test storage... 00:07:10.628 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:10.628 09:16:54 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:10.628 09:16:54 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:10.628 09:16:54 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.628 09:16:54 thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.628 ************************************ 00:07:10.628 START TEST thread_poller_perf 00:07:10.628 ************************************ 00:07:10.628 09:16:54 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:10.628 [2024-07-14 09:16:54.866966] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:10.628 [2024-07-14 09:16:54.867030] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid617347 ] 00:07:10.628 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.628 [2024-07-14 09:16:54.931194] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.628 [2024-07-14 09:16:55.020771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.628 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:12.000 ====================================== 00:07:12.000 busy:2712082580 (cyc) 00:07:12.000 total_run_count: 292000 00:07:12.000 tsc_hz: 2700000000 (cyc) 00:07:12.000 ====================================== 00:07:12.000 poller_cost: 9287 (cyc), 3439 (nsec) 00:07:12.000 00:07:12.000 real 0m1.257s 00:07:12.000 user 0m1.164s 00:07:12.000 sys 0m0.087s 00:07:12.000 09:16:56 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:12.000 09:16:56 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:12.000 ************************************ 00:07:12.000 END TEST thread_poller_perf 00:07:12.000 ************************************ 00:07:12.000 09:16:56 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:12.000 09:16:56 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:12.000 09:16:56 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:12.000 09:16:56 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.000 09:16:56 thread -- common/autotest_common.sh@10 -- # set +x 00:07:12.000 ************************************ 00:07:12.000 START TEST thread_poller_perf 00:07:12.000 ************************************ 00:07:12.000 09:16:56 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:12.000 [2024-07-14 09:16:56.172041] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:12.000 [2024-07-14 09:16:56.172100] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid617499 ] 00:07:12.000 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.000 [2024-07-14 09:16:56.236230] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.000 [2024-07-14 09:16:56.330861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.000 Running 1000 pollers for 1 seconds with 0 microseconds period. 
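The summary above is consistent with poller_cost being busy cycles divided by total_run_count, converted to nanoseconds via tsc_hz; that formula is inferred from the printed numbers, not taken from the poller_perf source. Recomputing the 1 µs-period run:
  busy=2712082580; runs=292000; tsc_hz=2700000000
  echo $(( busy / runs ))   # 9287 cycles per poll, matching poller_cost above
  awk -v cyc=$(( busy / runs )) -v hz=$tsc_hz 'BEGIN { printf "%d nsec\n", cyc * 1e9 / hz }'   # 3439 nsec
  # The same arithmetic reproduces the 700 cyc / 259 nsec figures of the 0 µs-period run that follows.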
00:07:12.960 ====================================== 00:07:12.960 busy:2702443181 (cyc) 00:07:12.960 total_run_count: 3856000 00:07:12.960 tsc_hz: 2700000000 (cyc) 00:07:12.960 ====================================== 00:07:12.960 poller_cost: 700 (cyc), 259 (nsec) 00:07:12.960 00:07:12.960 real 0m1.252s 00:07:12.960 user 0m1.163s 00:07:12.960 sys 0m0.083s 00:07:12.960 09:16:57 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:12.960 09:16:57 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:12.960 ************************************ 00:07:13.218 END TEST thread_poller_perf 00:07:13.218 ************************************ 00:07:13.218 09:16:57 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:13.218 09:16:57 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:13.218 00:07:13.218 real 0m2.648s 00:07:13.218 user 0m2.380s 00:07:13.218 sys 0m0.266s 00:07:13.218 09:16:57 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.218 09:16:57 thread -- common/autotest_common.sh@10 -- # set +x 00:07:13.218 ************************************ 00:07:13.218 END TEST thread 00:07:13.218 ************************************ 00:07:13.218 09:16:57 -- common/autotest_common.sh@1142 -- # return 0 00:07:13.218 09:16:57 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:13.218 09:16:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:13.218 09:16:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.218 09:16:57 -- common/autotest_common.sh@10 -- # set +x 00:07:13.218 ************************************ 00:07:13.218 START TEST accel 00:07:13.218 ************************************ 00:07:13.218 09:16:57 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:13.218 * Looking for test storage... 00:07:13.218 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:13.218 09:16:57 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:13.218 09:16:57 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:13.218 09:16:57 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:13.218 09:16:57 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=617811 00:07:13.218 09:16:57 accel -- accel/accel.sh@63 -- # waitforlisten 617811 00:07:13.218 09:16:57 accel -- common/autotest_common.sh@829 -- # '[' -z 617811 ']' 00:07:13.218 09:16:57 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:13.218 09:16:57 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.218 09:16:57 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:13.218 09:16:57 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:13.218 09:16:57 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:13.218 09:16:57 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:13.218 09:16:57 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:13.218 09:16:57 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:13.218 09:16:57 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.218 09:16:57 accel -- common/autotest_common.sh@10 -- # set +x 00:07:13.218 09:16:57 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.218 09:16:57 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:13.218 09:16:57 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:13.218 09:16:57 accel -- accel/accel.sh@41 -- # jq -r . 00:07:13.218 [2024-07-14 09:16:57.586302] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:13.218 [2024-07-14 09:16:57.586382] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid617811 ] 00:07:13.218 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.218 [2024-07-14 09:16:57.645191] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.477 [2024-07-14 09:16:57.734770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.736 09:16:57 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:13.736 09:16:57 accel -- common/autotest_common.sh@862 -- # return 0 00:07:13.736 09:16:57 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:13.736 09:16:57 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:13.736 09:16:57 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:13.736 09:16:57 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:13.736 09:16:57 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:13.736 09:16:57 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:13.736 09:16:57 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:13.736 09:16:57 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.736 09:16:57 accel -- common/autotest_common.sh@10 -- # set +x 00:07:13.736 09:16:58 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.736 09:16:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:13.736 09:16:58 accel -- accel/accel.sh@72 -- # IFS== 00:07:13.736 09:16:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:13.736 09:16:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:13.736 09:16:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:13.736 09:16:58 accel -- accel/accel.sh@72 -- # IFS== 00:07:13.736 09:16:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:13.736 09:16:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:13.736 09:16:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:13.736 09:16:58 accel -- accel/accel.sh@72 -- # IFS== 00:07:13.736 09:16:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:13.736 09:16:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:13.736 09:16:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:13.736 09:16:58 accel -- accel/accel.sh@72 -- # IFS== 00:07:13.736 09:16:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:13.736 09:16:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:13.736 09:16:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:13.736 09:16:58 accel -- accel/accel.sh@72 -- # IFS== 00:07:13.736 09:16:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:13.736 09:16:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:13.736 09:16:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:13.736 09:16:58 accel -- accel/accel.sh@72 -- # IFS== 00:07:13.736 09:16:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:13.736 09:16:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:13.736 09:16:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:13.736 09:16:58 accel -- accel/accel.sh@72 -- # IFS== 00:07:13.736 09:16:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:13.736 09:16:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:13.736 09:16:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:13.736 09:16:58 accel -- accel/accel.sh@72 -- # IFS== 00:07:13.736 09:16:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:13.736 09:16:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:13.736 09:16:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:13.736 09:16:58 accel -- accel/accel.sh@72 -- # IFS== 00:07:13.736 09:16:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:13.736 09:16:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:13.736 09:16:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:13.736 09:16:58 accel -- accel/accel.sh@72 -- # IFS== 00:07:13.736 09:16:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:13.736 09:16:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:13.736 09:16:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:13.736 09:16:58 accel -- accel/accel.sh@72 -- # IFS== 00:07:13.736 09:16:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:13.736 
09:16:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:13.736 09:16:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:13.736 09:16:58 accel -- accel/accel.sh@72 -- # IFS== 00:07:13.736 09:16:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:13.736 09:16:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:13.736 09:16:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:13.736 09:16:58 accel -- accel/accel.sh@72 -- # IFS== 00:07:13.736 09:16:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:13.736 09:16:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:13.736 09:16:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:13.736 09:16:58 accel -- accel/accel.sh@72 -- # IFS== 00:07:13.736 09:16:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:13.736 09:16:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:13.736 09:16:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:13.736 09:16:58 accel -- accel/accel.sh@72 -- # IFS== 00:07:13.736 09:16:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:13.736 09:16:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:13.736 09:16:58 accel -- accel/accel.sh@75 -- # killprocess 617811 00:07:13.736 09:16:58 accel -- common/autotest_common.sh@948 -- # '[' -z 617811 ']' 00:07:13.736 09:16:58 accel -- common/autotest_common.sh@952 -- # kill -0 617811 00:07:13.736 09:16:58 accel -- common/autotest_common.sh@953 -- # uname 00:07:13.736 09:16:58 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:13.736 09:16:58 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 617811 00:07:13.736 09:16:58 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:13.736 09:16:58 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:13.736 09:16:58 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 617811' 00:07:13.736 killing process with pid 617811 00:07:13.736 09:16:58 accel -- common/autotest_common.sh@967 -- # kill 617811 00:07:13.736 09:16:58 accel -- common/autotest_common.sh@972 -- # wait 617811 00:07:14.304 09:16:58 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:14.304 09:16:58 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:14.304 09:16:58 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:14.304 09:16:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.304 09:16:58 accel -- common/autotest_common.sh@10 -- # set +x 00:07:14.304 09:16:58 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:07:14.304 09:16:58 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:14.304 09:16:58 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:14.304 09:16:58 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:14.304 09:16:58 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:14.304 09:16:58 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.304 09:16:58 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.304 09:16:58 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:14.304 09:16:58 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:14.304 09:16:58 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:07:14.304 09:16:58 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.304 09:16:58 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:14.304 09:16:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:14.304 09:16:58 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:14.304 09:16:58 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:14.304 09:16:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.304 09:16:58 accel -- common/autotest_common.sh@10 -- # set +x 00:07:14.304 ************************************ 00:07:14.304 START TEST accel_missing_filename 00:07:14.304 ************************************ 00:07:14.304 09:16:58 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:07:14.304 09:16:58 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:14.304 09:16:58 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:14.304 09:16:58 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:14.304 09:16:58 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:14.304 09:16:58 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:14.305 09:16:58 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:14.305 09:16:58 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:14.305 09:16:58 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:14.305 09:16:58 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:14.305 09:16:58 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:14.305 09:16:58 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:14.305 09:16:58 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.305 09:16:58 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.305 09:16:58 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:14.305 09:16:58 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:14.305 09:16:58 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:14.305 [2024-07-14 09:16:58.597756] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:14.305 [2024-07-14 09:16:58.597823] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid617931 ] 00:07:14.305 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.305 [2024-07-14 09:16:58.662343] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.305 [2024-07-14 09:16:58.755509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.564 [2024-07-14 09:16:58.815431] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:14.564 [2024-07-14 09:16:58.902674] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:14.564 A filename is required. 
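The error above is the point of this negative test: the compress workload needs an uncompressed input file, and accel_perf refuses to start without one. Reduced to a plain command line (binary path shortened from the full workspace path used throughout this log):
  # Expected to fail — compress with no input file:
  ./build/examples/accel_perf -t 1 -w compress
  # accel_perf exits non-zero with "A filename is required."; the NOT() wrapper in
  # autotest_common.sh maps that exit status down to es=1, so the test counts as passed.
  # Supplying a file via -l (and -y to verify) is what the accel_compress_verify test below does.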
00:07:14.564 09:16:58 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:14.564 09:16:58 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:14.564 09:16:58 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:14.564 09:16:58 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:14.564 09:16:58 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:14.564 09:16:58 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:14.564 00:07:14.564 real 0m0.407s 00:07:14.564 user 0m0.288s 00:07:14.564 sys 0m0.148s 00:07:14.564 09:16:58 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.564 09:16:58 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:14.564 ************************************ 00:07:14.564 END TEST accel_missing_filename 00:07:14.564 ************************************ 00:07:14.564 09:16:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:14.564 09:16:59 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:14.564 09:16:59 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:14.564 09:16:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.564 09:16:59 accel -- common/autotest_common.sh@10 -- # set +x 00:07:14.821 ************************************ 00:07:14.821 START TEST accel_compress_verify 00:07:14.821 ************************************ 00:07:14.821 09:16:59 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:14.821 09:16:59 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:14.821 09:16:59 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:14.821 09:16:59 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:14.821 09:16:59 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:14.821 09:16:59 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:14.821 09:16:59 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:14.821 09:16:59 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:14.821 09:16:59 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:14.821 09:16:59 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:14.821 09:16:59 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:14.821 09:16:59 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:14.821 09:16:59 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.821 09:16:59 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.821 09:16:59 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:14.821 09:16:59 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:14.821 09:16:59 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:14.821 [2024-07-14 09:16:59.055720] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:14.821 [2024-07-14 09:16:59.055790] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid618007 ] 00:07:14.821 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.821 [2024-07-14 09:16:59.120016] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.821 [2024-07-14 09:16:59.213573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.821 [2024-07-14 09:16:59.272627] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:15.081 [2024-07-14 09:16:59.345976] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:15.081 00:07:15.081 Compression does not support the verify option, aborting. 00:07:15.081 09:16:59 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:15.081 09:16:59 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:15.081 09:16:59 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:15.081 09:16:59 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:15.081 09:16:59 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:15.081 09:16:59 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:15.081 00:07:15.081 real 0m0.393s 00:07:15.081 user 0m0.285s 00:07:15.081 sys 0m0.142s 00:07:15.081 09:16:59 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:15.081 09:16:59 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:15.081 ************************************ 00:07:15.081 END TEST accel_compress_verify 00:07:15.081 ************************************ 00:07:15.081 09:16:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:15.081 09:16:59 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:15.081 09:16:59 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:15.081 09:16:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.081 09:16:59 accel -- common/autotest_common.sh@10 -- # set +x 00:07:15.081 ************************************ 00:07:15.081 START TEST accel_wrong_workload 00:07:15.081 ************************************ 00:07:15.081 09:16:59 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:07:15.081 09:16:59 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:15.081 09:16:59 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:15.081 09:16:59 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:15.081 09:16:59 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:15.081 09:16:59 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:15.081 09:16:59 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:15.081 09:16:59 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:07:15.081 09:16:59 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:15.081 09:16:59 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:15.081 09:16:59 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:15.081 09:16:59 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:15.081 09:16:59 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.081 09:16:59 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.081 09:16:59 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:15.081 09:16:59 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:15.081 09:16:59 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:15.081 Unsupported workload type: foobar 00:07:15.082 [2024-07-14 09:16:59.497707] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:15.082 accel_perf options: 00:07:15.082 [-h help message] 00:07:15.082 [-q queue depth per core] 00:07:15.082 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:15.082 [-T number of threads per core 00:07:15.082 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:15.082 [-t time in seconds] 00:07:15.082 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:15.082 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:15.082 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:15.082 [-l for compress/decompress workloads, name of uncompressed input file 00:07:15.082 [-S for crc32c workload, use this seed value (default 0) 00:07:15.082 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:15.082 [-f for fill workload, use this BYTE value (default 255) 00:07:15.082 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:15.082 [-y verify result if this switch is on] 00:07:15.082 [-a tasks to allocate per core (default: same value as -q)] 00:07:15.082 Can be used to spread operations across a wider range of memory. 
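The option list just printed also shows what a supported invocation looks like; the crc32c runs later in this log use exactly these flags (a workload from the allowed list, -S for the seed, -y to verify). A valid counterpart to the rejected '-w foobar', with the binary path shortened as above:
  ./build/examples/accel_perf -t 1 -w crc32c -S 32 -y
  # For the xor workload, -x selects the number of source buffers and must be at least 2;
  # passing '-x -1' is what the accel_negative_buffers test below is expected to reject.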
00:07:15.082 09:16:59 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:15.082 09:16:59 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:15.082 09:16:59 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:15.082 09:16:59 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:15.082 00:07:15.082 real 0m0.023s 00:07:15.082 user 0m0.014s 00:07:15.082 sys 0m0.010s 00:07:15.082 09:16:59 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:15.082 09:16:59 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:15.082 ************************************ 00:07:15.082 END TEST accel_wrong_workload 00:07:15.082 ************************************ 00:07:15.082 Error: writing output failed: Broken pipe 00:07:15.082 09:16:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:15.082 09:16:59 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:15.082 09:16:59 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:15.082 09:16:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.082 09:16:59 accel -- common/autotest_common.sh@10 -- # set +x 00:07:15.379 ************************************ 00:07:15.379 START TEST accel_negative_buffers 00:07:15.379 ************************************ 00:07:15.379 09:16:59 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:15.379 09:16:59 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:15.379 09:16:59 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:15.379 09:16:59 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:15.379 09:16:59 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:15.379 09:16:59 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:15.379 09:16:59 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:15.379 09:16:59 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:15.379 09:16:59 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:15.379 09:16:59 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:15.379 09:16:59 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:15.379 09:16:59 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:15.379 09:16:59 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.379 09:16:59 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.379 09:16:59 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:15.379 09:16:59 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:15.379 09:16:59 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:15.379 -x option must be non-negative. 
00:07:15.379 [2024-07-14 09:16:59.559781] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:15.379 accel_perf options: 00:07:15.379 [-h help message] 00:07:15.379 [-q queue depth per core] 00:07:15.379 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:15.379 [-T number of threads per core 00:07:15.379 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:15.379 [-t time in seconds] 00:07:15.379 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:15.379 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:15.379 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:15.379 [-l for compress/decompress workloads, name of uncompressed input file 00:07:15.379 [-S for crc32c workload, use this seed value (default 0) 00:07:15.379 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:15.379 [-f for fill workload, use this BYTE value (default 255) 00:07:15.379 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:15.379 [-y verify result if this switch is on] 00:07:15.379 [-a tasks to allocate per core (default: same value as -q)] 00:07:15.379 Can be used to spread operations across a wider range of memory. 00:07:15.379 09:16:59 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:07:15.379 09:16:59 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:15.379 09:16:59 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:15.379 09:16:59 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:15.379 00:07:15.379 real 0m0.021s 00:07:15.379 user 0m0.013s 00:07:15.379 sys 0m0.008s 00:07:15.379 09:16:59 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:15.379 09:16:59 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:15.379 ************************************ 00:07:15.379 END TEST accel_negative_buffers 00:07:15.379 ************************************ 00:07:15.379 Error: writing output failed: Broken pipe 00:07:15.379 09:16:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:15.379 09:16:59 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:15.379 09:16:59 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:15.379 09:16:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.379 09:16:59 accel -- common/autotest_common.sh@10 -- # set +x 00:07:15.379 ************************************ 00:07:15.379 START TEST accel_crc32c 00:07:15.379 ************************************ 00:07:15.379 09:16:59 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:15.379 09:16:59 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:15.379 09:16:59 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:15.379 09:16:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:15.379 09:16:59 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:15.379 09:16:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:15.379 09:16:59 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:15.379 09:16:59 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:15.379 09:16:59 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:15.379 09:16:59 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:15.379 09:16:59 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.379 09:16:59 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.379 09:16:59 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:15.379 09:16:59 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:15.379 09:16:59 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:15.379 [2024-07-14 09:16:59.631786] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:15.380 [2024-07-14 09:16:59.631854] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid618077 ] 00:07:15.380 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.380 [2024-07-14 09:16:59.696441] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.380 [2024-07-14 09:16:59.793637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.637 09:16:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:15.637 09:16:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:15.637 09:16:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:15.637 09:16:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:15.637 09:16:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:15.637 09:16:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:15.637 09:16:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:15.637 09:16:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:15.637 09:16:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:15.637 09:16:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:15.637 09:16:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:15.638 09:16:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:15.638 09:16:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:15.638 09:16:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:15.638 09:16:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:15.638 09:16:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:15.638 09:16:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:15.638 09:16:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:15.638 09:16:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:15.638 09:16:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:15.638 09:16:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:15.638 09:16:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:15.638 09:16:59 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:15.638 09:16:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:15.638 09:16:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:15.638 09:16:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:15.638 09:16:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:07:15.638 09:16:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:15.638 09:16:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:15.638 09:16:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:15.638 09:16:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:15.638 09:16:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:15.638 09:16:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:15.638 09:16:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:15.638 09:16:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:15.638 09:16:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:15.638 09:16:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:15.638 09:16:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:15.638 09:16:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:15.638 09:16:59 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:15.638 09:16:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:15.638 09:16:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:15.638 09:16:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:15.638 09:16:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:15.638 09:16:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:15.638 09:16:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:15.638 09:16:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:15.638 09:16:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:15.638 09:16:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:15.638 09:16:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:15.638 09:16:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:15.638 09:16:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:15.638 09:16:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:15.638 09:16:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:15.638 09:16:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:15.638 09:16:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:15.638 09:16:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:15.638 09:16:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:15.638 09:16:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:15.638 09:16:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:15.638 09:16:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:15.638 09:16:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:15.638 09:16:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:15.638 09:16:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:15.638 09:16:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:15.638 09:16:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:15.638 09:16:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:15.638 09:16:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:15.638 09:16:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:15.638 09:16:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.570 09:17:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:16.570 09:17:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:07:16.570 09:17:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.570 09:17:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.570 09:17:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:16.570 09:17:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.570 09:17:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.570 09:17:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.570 09:17:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:16.828 09:17:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.828 09:17:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.828 09:17:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.828 09:17:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:16.828 09:17:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.828 09:17:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.828 09:17:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.828 09:17:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:16.828 09:17:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.828 09:17:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.828 09:17:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.828 09:17:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:16.828 09:17:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.828 09:17:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.828 09:17:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.828 09:17:01 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:16.828 09:17:01 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:16.828 09:17:01 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.829 00:07:16.829 real 0m1.412s 00:07:16.829 user 0m1.274s 00:07:16.829 sys 0m0.139s 00:07:16.829 09:17:01 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:16.829 09:17:01 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:16.829 ************************************ 00:07:16.829 END TEST accel_crc32c 00:07:16.829 ************************************ 00:07:16.829 09:17:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:16.829 09:17:01 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:16.829 09:17:01 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:16.829 09:17:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.829 09:17:01 accel -- common/autotest_common.sh@10 -- # set +x 00:07:16.829 ************************************ 00:07:16.829 START TEST accel_crc32c_C2 00:07:16.829 ************************************ 00:07:16.829 09:17:01 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:16.829 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:16.829 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:16.829 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:16.829 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:16.829 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:16.829 09:17:01 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:16.829 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:16.829 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:16.829 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:16.829 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.829 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.829 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:16.829 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:16.829 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:16.829 [2024-07-14 09:17:01.085204] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:16.829 [2024-07-14 09:17:01.085267] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid618349 ] 00:07:16.829 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.829 [2024-07-14 09:17:01.146328] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.829 [2024-07-14 09:17:01.239547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.087 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:17.087 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.087 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:17.087 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:17.087 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:17.087 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.087 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:17.087 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:17.087 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:17.087 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.087 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:17.087 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:17.087 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:17.087 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.087 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:17.087 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:17.087 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:17.087 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.087 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:17.087 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:17.087 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:17.087 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.087 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:17.087 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:17.087 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:17.087 09:17:01 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:17.087 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.087 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:17.087 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:17.087 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:17.087 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.087 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:17.087 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:17.087 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:17.087 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.087 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:17.087 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:17.087 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:17.087 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.087 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:17.087 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:17.087 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:17.087 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:17.087 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.087 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:17.087 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:17.087 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:17.087 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.087 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:17.087 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:17.087 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:17.087 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.087 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:17.087 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:17.087 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:17.087 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.087 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:17.087 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:17.087 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:17.087 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.088 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:17.088 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:17.088 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:17.088 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.088 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:17.088 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:17.088 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:17.088 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.088 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:07:17.088 09:17:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.019 09:17:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:18.019 09:17:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.019 09:17:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.019 09:17:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.019 09:17:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:18.019 09:17:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.019 09:17:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.019 09:17:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.019 09:17:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:18.019 09:17:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.019 09:17:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.019 09:17:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.019 09:17:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:18.019 09:17:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.019 09:17:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.019 09:17:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.019 09:17:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:18.019 09:17:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.019 09:17:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.019 09:17:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.019 09:17:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:18.019 09:17:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.019 09:17:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.019 09:17:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.019 09:17:02 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:18.019 09:17:02 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:18.019 09:17:02 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:18.019 00:07:18.019 real 0m1.403s 00:07:18.019 user 0m1.260s 00:07:18.019 sys 0m0.145s 00:07:18.278 09:17:02 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:18.278 09:17:02 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:18.278 ************************************ 00:07:18.278 END TEST accel_crc32c_C2 00:07:18.278 ************************************ 00:07:18.278 09:17:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:18.278 09:17:02 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:18.278 09:17:02 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:18.278 09:17:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.278 09:17:02 accel -- common/autotest_common.sh@10 -- # set +x 00:07:18.278 ************************************ 00:07:18.278 START TEST accel_copy 00:07:18.278 ************************************ 00:07:18.278 09:17:02 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:07:18.278 09:17:02 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:18.278 09:17:02 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
00:07:18.278 09:17:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:18.278 09:17:02 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:18.278 09:17:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:18.278 09:17:02 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:18.278 09:17:02 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:18.278 09:17:02 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:18.278 09:17:02 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:18.278 09:17:02 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.278 09:17:02 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.278 09:17:02 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:18.278 09:17:02 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:18.278 09:17:02 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:18.278 [2024-07-14 09:17:02.530571] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:18.278 [2024-07-14 09:17:02.530637] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid618509 ] 00:07:18.278 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.278 [2024-07-14 09:17:02.592689] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.278 [2024-07-14 09:17:02.684166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:18.536 09:17:02 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:18.537 09:17:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:18.537 09:17:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:18.537 09:17:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:19.470 09:17:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:19.470 09:17:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:19.470 09:17:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:19.470 09:17:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:19.470 
09:17:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:19.470 09:17:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:19.470 09:17:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:19.470 09:17:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:19.470 09:17:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:19.470 09:17:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:19.470 09:17:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:19.470 09:17:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:19.470 09:17:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:19.470 09:17:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:19.470 09:17:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:19.470 09:17:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:19.470 09:17:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:19.470 09:17:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:19.470 09:17:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:19.470 09:17:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:19.470 09:17:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:19.470 09:17:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:19.470 09:17:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:19.470 09:17:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:19.470 09:17:03 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:19.470 09:17:03 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:19.470 09:17:03 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.470 00:07:19.470 real 0m1.387s 00:07:19.470 user 0m1.247s 00:07:19.470 sys 0m0.140s 00:07:19.470 09:17:03 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.470 09:17:03 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:19.470 ************************************ 00:07:19.470 END TEST accel_copy 00:07:19.470 ************************************ 00:07:19.728 09:17:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:19.728 09:17:03 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:19.728 09:17:03 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:19.728 09:17:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.728 09:17:03 accel -- common/autotest_common.sh@10 -- # set +x 00:07:19.728 ************************************ 00:07:19.728 START TEST accel_fill 00:07:19.728 ************************************ 00:07:19.728 09:17:03 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:19.728 09:17:03 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:19.728 09:17:03 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:19.728 09:17:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:19.728 09:17:03 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:19.728 09:17:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:19.728 09:17:03 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:19.728 09:17:03 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:07:19.728 09:17:03 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:19.728 09:17:03 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:19.728 09:17:03 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.728 09:17:03 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.728 09:17:03 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.728 09:17:03 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:19.728 09:17:03 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:19.728 [2024-07-14 09:17:03.970445] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:19.728 [2024-07-14 09:17:03.970510] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid618667 ] 00:07:19.728 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.728 [2024-07-14 09:17:04.033340] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.728 [2024-07-14 09:17:04.124707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:19.986 09:17:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:20.919 09:17:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:20.919 09:17:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:20.919 09:17:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:20.919 09:17:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:20.919 09:17:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:20.919 09:17:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:20.919 09:17:05 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:07:20.919 09:17:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:20.919 09:17:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:20.919 09:17:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:20.919 09:17:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:20.919 09:17:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:20.919 09:17:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:20.919 09:17:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:20.919 09:17:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:20.919 09:17:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:20.919 09:17:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:20.919 09:17:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:20.919 09:17:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:20.919 09:17:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:20.919 09:17:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:20.919 09:17:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:20.919 09:17:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:20.919 09:17:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:20.919 09:17:05 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:20.919 09:17:05 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:20.919 09:17:05 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:20.919 00:07:20.919 real 0m1.405s 00:07:20.919 user 0m1.254s 00:07:20.919 sys 0m0.152s 00:07:20.919 09:17:05 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.919 09:17:05 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:20.919 ************************************ 00:07:20.919 END TEST accel_fill 00:07:20.919 ************************************ 00:07:21.178 09:17:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:21.178 09:17:05 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:21.178 09:17:05 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:21.178 09:17:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.178 09:17:05 accel -- common/autotest_common.sh@10 -- # set +x 00:07:21.178 ************************************ 00:07:21.178 START TEST accel_copy_crc32c 00:07:21.178 ************************************ 00:07:21.178 09:17:05 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:07:21.178 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:21.178 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:21.178 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.178 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:21.178 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.178 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:21.178 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:21.178 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:21.178 09:17:05 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:21.178 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.178 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.178 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:21.178 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:21.178 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:21.178 [2024-07-14 09:17:05.418119] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:21.178 [2024-07-14 09:17:05.418189] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid618889 ] 00:07:21.178 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.178 [2024-07-14 09:17:05.479240] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.178 [2024-07-14 09:17:05.572529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.436 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:21.437 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:21.437 
09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:21.437 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:21.437 09:17:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.371 09:17:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:22.371 09:17:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.371 09:17:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.371 09:17:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.371 09:17:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:22.371 09:17:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.371 09:17:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.371 09:17:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.371 09:17:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:22.371 09:17:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.371 09:17:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.371 09:17:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.371 09:17:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:22.371 09:17:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.371 09:17:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.371 09:17:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.371 09:17:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:22.371 09:17:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.371 09:17:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.371 09:17:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.371 09:17:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:22.371 09:17:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.371 09:17:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.371 09:17:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.371 09:17:06 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:22.371 09:17:06 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:22.371 09:17:06 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:22.371 00:07:22.371 real 0m1.403s 00:07:22.371 user 0m1.257s 00:07:22.371 sys 0m0.148s 00:07:22.371 09:17:06 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.371 09:17:06 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:22.371 ************************************ 00:07:22.371 END TEST accel_copy_crc32c 00:07:22.371 ************************************ 00:07:22.630 09:17:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:22.630 09:17:06 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:22.630 09:17:06 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:22.630 09:17:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.630 09:17:06 accel -- common/autotest_common.sh@10 -- # set +x 00:07:22.630 ************************************ 00:07:22.630 START TEST accel_copy_crc32c_C2 00:07:22.630 ************************************ 00:07:22.630 09:17:06 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:22.630 09:17:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:22.630 09:17:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:22.630 09:17:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.630 09:17:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:22.630 09:17:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.630 09:17:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:22.630 09:17:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:22.630 09:17:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:22.630 09:17:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:22.630 09:17:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.630 09:17:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.630 09:17:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:22.630 09:17:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:22.630 09:17:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:22.630 [2024-07-14 09:17:06.867943] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:22.630 [2024-07-14 09:17:06.868008] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid619094 ] 00:07:22.630 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.630 [2024-07-14 09:17:06.929239] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.630 [2024-07-14 09:17:07.022278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.889 09:17:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.824 09:17:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:23.824 09:17:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.824 09:17:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.824 09:17:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.824 09:17:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:23.824 09:17:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.824 09:17:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.824 09:17:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.824 09:17:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:23.824 09:17:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.824 09:17:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.824 09:17:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.824 09:17:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:23.824 09:17:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.824 09:17:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.824 09:17:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.824 09:17:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:23.824 09:17:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.824 09:17:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.824 09:17:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.824 09:17:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:23.824 09:17:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.824 09:17:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.824 09:17:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:07:23.824 09:17:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:23.824 09:17:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:23.824 09:17:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:23.824 00:07:23.824 real 0m1.411s 00:07:23.824 user 0m1.265s 00:07:23.824 sys 0m0.149s 00:07:23.824 09:17:08 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:23.824 09:17:08 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:23.824 ************************************ 00:07:23.825 END TEST accel_copy_crc32c_C2 00:07:23.825 ************************************ 00:07:24.083 09:17:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:24.083 09:17:08 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:24.083 09:17:08 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:24.083 09:17:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.083 09:17:08 accel -- common/autotest_common.sh@10 -- # set +x 00:07:24.083 ************************************ 00:07:24.083 START TEST accel_dualcast 00:07:24.083 ************************************ 00:07:24.083 09:17:08 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:07:24.083 09:17:08 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:24.083 09:17:08 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:24.083 09:17:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:24.083 09:17:08 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:24.083 09:17:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:24.083 09:17:08 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:24.083 09:17:08 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:24.083 09:17:08 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:24.083 09:17:08 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:24.083 09:17:08 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.083 09:17:08 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.083 09:17:08 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:24.083 09:17:08 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:24.083 09:17:08 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:24.083 [2024-07-14 09:17:08.325226] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:07:24.084 [2024-07-14 09:17:08.325289] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid619252 ] 00:07:24.084 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.084 [2024-07-14 09:17:08.382980] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.084 [2024-07-14 09:17:08.473358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.084 09:17:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:24.084 09:17:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:24.084 09:17:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:24.084 09:17:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:24.084 09:17:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:24.342 09:17:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:24.342 09:17:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:24.342 09:17:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:24.342 09:17:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:24.342 09:17:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:24.342 09:17:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:24.342 09:17:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:24.342 09:17:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:24.342 09:17:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:24.342 09:17:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:24.342 09:17:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:24.342 09:17:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:24.342 09:17:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:24.342 09:17:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:24.342 09:17:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:24.342 09:17:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:24.342 09:17:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:24.342 09:17:08 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:24.342 09:17:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:24.342 09:17:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:24.342 09:17:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:24.342 09:17:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:24.342 09:17:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:24.342 09:17:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:24.342 09:17:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:24.342 09:17:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:24.342 09:17:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:24.342 09:17:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:24.342 09:17:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:24.342 09:17:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:24.342 09:17:08 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:24.342 09:17:08 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:07:24.342 09:17:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:24.342 09:17:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:24.342 09:17:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:24.342 09:17:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:24.342 09:17:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:24.342 09:17:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:24.342 09:17:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:24.342 09:17:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:24.342 09:17:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:24.342 09:17:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:24.342 09:17:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:24.342 09:17:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:24.342 09:17:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:24.342 09:17:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:24.342 09:17:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:24.342 09:17:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:24.342 09:17:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:24.342 09:17:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:24.342 09:17:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:24.342 09:17:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:24.342 09:17:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:24.342 09:17:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:24.342 09:17:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:24.342 09:17:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:24.342 09:17:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:24.342 09:17:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:24.342 09:17:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:24.342 09:17:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:24.342 09:17:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:25.278 09:17:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:25.278 09:17:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:25.278 09:17:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:25.278 09:17:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:25.278 09:17:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:25.278 09:17:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:25.278 09:17:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:25.278 09:17:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:25.278 09:17:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:25.278 09:17:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:25.278 09:17:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:25.278 09:17:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:25.278 09:17:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:25.278 09:17:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:25.278 09:17:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:25.278 09:17:09 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:25.278 09:17:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:25.278 09:17:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:25.278 09:17:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:25.278 09:17:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:25.278 09:17:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:25.278 09:17:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:25.278 09:17:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:25.278 09:17:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:25.278 09:17:09 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:25.278 09:17:09 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:25.278 09:17:09 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:25.278 00:07:25.278 real 0m1.396s 00:07:25.278 user 0m1.252s 00:07:25.278 sys 0m0.147s 00:07:25.278 09:17:09 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.278 09:17:09 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:25.278 ************************************ 00:07:25.278 END TEST accel_dualcast 00:07:25.278 ************************************ 00:07:25.278 09:17:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:25.278 09:17:09 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:25.278 09:17:09 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:25.278 09:17:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.278 09:17:09 accel -- common/autotest_common.sh@10 -- # set +x 00:07:25.537 ************************************ 00:07:25.537 START TEST accel_compare 00:07:25.537 ************************************ 00:07:25.537 09:17:09 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:25.537 [2024-07-14 09:17:09.767966] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:07:25.537 [2024-07-14 09:17:09.768031] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid619412 ] 00:07:25.537 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.537 [2024-07-14 09:17:09.828554] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.537 [2024-07-14 09:17:09.921357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:25.537 09:17:09 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:25.537 09:17:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:26.911 09:17:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:26.911 09:17:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:26.911 09:17:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:26.911 09:17:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:26.911 09:17:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:26.911 09:17:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:26.911 09:17:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:26.911 09:17:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:26.911 09:17:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:26.911 09:17:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:26.911 09:17:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:26.911 09:17:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:26.911 09:17:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:26.911 09:17:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:26.911 09:17:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:26.911 09:17:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:26.911 
09:17:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:26.911 09:17:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:26.911 09:17:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:26.911 09:17:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:26.911 09:17:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:26.911 09:17:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:26.911 09:17:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:26.911 09:17:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:26.911 09:17:11 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:26.911 09:17:11 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:26.911 09:17:11 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:26.911 00:07:26.911 real 0m1.394s 00:07:26.911 user 0m1.258s 00:07:26.911 sys 0m0.138s 00:07:26.911 09:17:11 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:26.911 09:17:11 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:26.911 ************************************ 00:07:26.911 END TEST accel_compare 00:07:26.911 ************************************ 00:07:26.911 09:17:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:26.911 09:17:11 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:26.911 09:17:11 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:26.911 09:17:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.911 09:17:11 accel -- common/autotest_common.sh@10 -- # set +x 00:07:26.911 ************************************ 00:07:26.911 START TEST accel_xor 00:07:26.911 ************************************ 00:07:26.911 09:17:11 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:07:26.911 09:17:11 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:26.911 09:17:11 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:26.911 09:17:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:26.911 09:17:11 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:26.911 09:17:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:26.911 09:17:11 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:26.911 09:17:11 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:26.911 09:17:11 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:26.911 09:17:11 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:26.911 09:17:11 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.911 09:17:11 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.911 09:17:11 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:26.912 09:17:11 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:26.912 09:17:11 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:26.912 [2024-07-14 09:17:11.212584] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
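The compare pass above completes in roughly 1.4 s of wall time, consistent with the 1-second measurement window requested by -t 1 plus application start-up and teardown. The xor run beginning here changes only the workload selector (the val=2 read back in its trace presumably being the default xor source-buffer count); a matching sketch, under the same assumptions as the compare example:

  # 1-second software-path xor run with verification
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w xor -y
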
00:07:26.912 [2024-07-14 09:17:11.212649] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid619680 ] 00:07:26.912 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.912 [2024-07-14 09:17:11.275798] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.170 [2024-07-14 09:17:11.368433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.170 09:17:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:27.170 09:17:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:27.170 09:17:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:27.170 09:17:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:27.170 09:17:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:27.170 09:17:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:27.170 09:17:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:27.170 09:17:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:27.170 09:17:11 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:27.170 09:17:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:27.170 09:17:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:27.170 09:17:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:27.170 09:17:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:27.170 09:17:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:27.170 09:17:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:27.170 09:17:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:27.170 09:17:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:27.170 09:17:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:27.170 09:17:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:27.170 09:17:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:27.170 09:17:11 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:27.170 09:17:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:27.170 09:17:11 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:27.170 09:17:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:27.170 09:17:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:27.170 09:17:11 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:27.170 09:17:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:27.170 09:17:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:27.170 09:17:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:27.170 09:17:11 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:27.170 09:17:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:27.170 09:17:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:27.170 09:17:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:27.170 09:17:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:27.170 09:17:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:27.170 09:17:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:27.170 09:17:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:27.170 09:17:11 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:27.170 09:17:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:27.170 09:17:11 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:07:27.170 09:17:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:27.170 09:17:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:27.170 09:17:11 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:27.170 09:17:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:27.170 09:17:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:27.170 09:17:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:27.170 09:17:11 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:27.170 09:17:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:27.170 09:17:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:27.170 09:17:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:27.170 09:17:11 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:27.170 09:17:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:27.170 09:17:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:27.170 09:17:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:27.171 09:17:11 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:27.171 09:17:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:27.171 09:17:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:27.171 09:17:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:27.171 09:17:11 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:27.171 09:17:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:27.171 09:17:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:27.171 09:17:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:27.171 09:17:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:27.171 09:17:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:27.171 09:17:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:27.171 09:17:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:27.171 09:17:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:27.171 09:17:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:27.171 09:17:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:27.171 09:17:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.544 09:17:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:28.544 09:17:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.544 09:17:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.544 09:17:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.544 09:17:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:28.544 09:17:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.544 09:17:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.544 09:17:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.544 09:17:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:28.544 09:17:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.544 09:17:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.544 09:17:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.544 09:17:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:28.544 09:17:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.544 09:17:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.544 09:17:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.544 09:17:12 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:07:28.544 09:17:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.544 09:17:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.544 09:17:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.544 09:17:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:28.544 09:17:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.544 09:17:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.544 09:17:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.544 09:17:12 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:28.544 09:17:12 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:28.544 09:17:12 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:28.544 00:07:28.544 real 0m1.407s 00:07:28.544 user 0m1.269s 00:07:28.544 sys 0m0.140s 00:07:28.544 09:17:12 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.544 09:17:12 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:28.544 ************************************ 00:07:28.544 END TEST accel_xor 00:07:28.544 ************************************ 00:07:28.544 09:17:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:28.544 09:17:12 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:28.544 09:17:12 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:28.544 09:17:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.544 09:17:12 accel -- common/autotest_common.sh@10 -- # set +x 00:07:28.544 ************************************ 00:07:28.544 START TEST accel_xor 00:07:28.544 ************************************ 00:07:28.544 09:17:12 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:07:28.544 09:17:12 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:28.544 09:17:12 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:28.544 09:17:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.544 09:17:12 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:28.544 09:17:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.544 09:17:12 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:28.544 09:17:12 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:28.544 09:17:12 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:28.544 09:17:12 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:28.544 09:17:12 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.544 09:17:12 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.544 09:17:12 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:28.544 09:17:12 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:28.544 09:17:12 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:28.544 [2024-07-14 09:17:12.662258] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
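This second xor pass is invoked with -x 3; comparing the two traces (val=2 above versus val=3 here) suggests -x selects how many xor source buffers accel_perf uses. A sketch of the three-source variant, same assumptions as before:

  # xor across three source buffers instead of the default two
  # (interpretation of -x based on the val=2/val=3 trace lines, not stated explicitly in the log)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w xor -y -x 3
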
00:07:28.544 [2024-07-14 09:17:12.662322] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid619841 ] 00:07:28.544 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.544 [2024-07-14 09:17:12.723691] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.544 [2024-07-14 09:17:12.816902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.544 09:17:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:28.544 09:17:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.544 09:17:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.544 09:17:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.544 09:17:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:28.544 09:17:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.544 09:17:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.544 09:17:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.544 09:17:12 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:28.544 09:17:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.544 09:17:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.544 09:17:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.544 09:17:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:28.544 09:17:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.545 09:17:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.545 09:17:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.545 09:17:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:28.545 09:17:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.545 09:17:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.545 09:17:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.545 09:17:12 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:28.545 09:17:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.545 09:17:12 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:28.545 09:17:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.545 09:17:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.545 09:17:12 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:28.545 09:17:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.545 09:17:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.545 09:17:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.545 09:17:12 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:28.545 09:17:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.545 09:17:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.545 09:17:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.545 09:17:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:28.545 09:17:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.545 09:17:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.545 09:17:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.545 09:17:12 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:28.545 09:17:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.545 09:17:12 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:07:28.545 09:17:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.545 09:17:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.545 09:17:12 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:28.545 09:17:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.545 09:17:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.545 09:17:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.545 09:17:12 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:28.545 09:17:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.545 09:17:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.545 09:17:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.545 09:17:12 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:28.545 09:17:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.545 09:17:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.545 09:17:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.545 09:17:12 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:28.545 09:17:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.545 09:17:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.545 09:17:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.545 09:17:12 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:28.545 09:17:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.545 09:17:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.545 09:17:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.545 09:17:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:28.545 09:17:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.545 09:17:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.545 09:17:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:28.545 09:17:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:28.545 09:17:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:28.545 09:17:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:28.545 09:17:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.951 09:17:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:29.951 09:17:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.951 09:17:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.951 09:17:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.951 09:17:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:29.951 09:17:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.951 09:17:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.951 09:17:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.951 09:17:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:29.951 09:17:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.951 09:17:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.951 09:17:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.951 09:17:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:29.951 09:17:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.951 09:17:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.951 09:17:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.951 09:17:14 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:07:29.951 09:17:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.951 09:17:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.951 09:17:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.951 09:17:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:29.951 09:17:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:29.951 09:17:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:29.951 09:17:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:29.951 09:17:14 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:29.951 09:17:14 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:29.951 09:17:14 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:29.951 00:07:29.951 real 0m1.409s 00:07:29.951 user 0m1.263s 00:07:29.951 sys 0m0.148s 00:07:29.951 09:17:14 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:29.951 09:17:14 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:29.951 ************************************ 00:07:29.951 END TEST accel_xor 00:07:29.951 ************************************ 00:07:29.951 09:17:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:29.951 09:17:14 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:29.951 09:17:14 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:29.951 09:17:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.951 09:17:14 accel -- common/autotest_common.sh@10 -- # set +x 00:07:29.951 ************************************ 00:07:29.951 START TEST accel_dif_verify 00:07:29.951 ************************************ 00:07:29.951 09:17:14 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:07:29.951 09:17:14 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:29.951 09:17:14 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:29.951 09:17:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:29.951 09:17:14 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:29.951 09:17:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:29.951 09:17:14 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:29.951 09:17:14 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:29.951 09:17:14 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:29.951 09:17:14 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:29.951 09:17:14 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.951 09:17:14 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.951 09:17:14 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:29.951 09:17:14 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:29.951 09:17:14 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:29.951 [2024-07-14 09:17:14.113566] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
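The dif_verify pass starting here drops the -y flag, and the extra values read out below ('512 bytes' and '8 bytes' alongside the 4096-byte buffers) appear to correspond to the usual DIF layout of 512-byte data blocks plus 8 bytes of protection information, though the trace does not label them. A matching sketch:

  # 1-second software-path DIF verify run; block/PI sizes shown in the trace are defaults, not passed here
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w dif_verify
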
00:07:29.951 [2024-07-14 09:17:14.113630] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid620001 ] 00:07:29.951 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.951 [2024-07-14 09:17:14.173433] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.951 [2024-07-14 09:17:14.266205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.951 09:17:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:29.951 09:17:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:29.951 09:17:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:29.951 09:17:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:29.951 09:17:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:29.951 09:17:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:29.951 09:17:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:29.951 09:17:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:29.951 09:17:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:29.951 09:17:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:29.951 09:17:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:29.951 09:17:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:29.951 09:17:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:29.951 09:17:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:29.951 09:17:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:29.951 09:17:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:29.951 09:17:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:29.951 09:17:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:29.951 09:17:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:29.951 09:17:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:29.951 09:17:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:29.951 09:17:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:29.951 09:17:14 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:29.951 09:17:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:29.951 09:17:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:29.951 09:17:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:29.951 09:17:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:29.951 09:17:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:29.951 09:17:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:29.951 09:17:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:29.951 09:17:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:29.951 09:17:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:29.951 09:17:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:29.951 09:17:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:29.951 09:17:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:29.951 09:17:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:07:29.951 09:17:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:29.951 09:17:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:29.951 09:17:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:29.951 09:17:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:29.951 09:17:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:29.951 09:17:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:29.951 09:17:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:29.952 09:17:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:29.952 09:17:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:29.952 09:17:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:29.952 09:17:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:29.952 09:17:14 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:29.952 09:17:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:29.952 09:17:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:29.952 09:17:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:29.952 09:17:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:29.952 09:17:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:29.952 09:17:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:29.952 09:17:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:29.952 09:17:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:29.952 09:17:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:29.952 09:17:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:29.952 09:17:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:29.952 09:17:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:29.952 09:17:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:29.952 09:17:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:29.952 09:17:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:29.952 09:17:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:29.952 09:17:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:29.952 09:17:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:29.952 09:17:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:29.952 09:17:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:29.952 09:17:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:29.952 09:17:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:29.952 09:17:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:29.952 09:17:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:29.952 09:17:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:29.952 09:17:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:29.952 09:17:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:29.952 09:17:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:29.952 09:17:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:29.952 09:17:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:31.328 09:17:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:07:31.328 09:17:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:31.328 09:17:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:31.328 09:17:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:31.328 09:17:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:31.328 09:17:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:31.328 09:17:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:31.328 09:17:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:31.328 09:17:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:31.328 09:17:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:31.328 09:17:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:31.328 09:17:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:31.328 09:17:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:31.328 09:17:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:31.328 09:17:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:31.328 09:17:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:31.328 09:17:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:31.328 09:17:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:31.328 09:17:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:31.328 09:17:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:31.328 09:17:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:31.328 09:17:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:31.328 09:17:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:31.328 09:17:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:31.328 09:17:15 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:31.328 09:17:15 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:31.328 09:17:15 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:31.328 00:07:31.328 real 0m1.408s 00:07:31.328 user 0m1.264s 00:07:31.328 sys 0m0.148s 00:07:31.328 09:17:15 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:31.328 09:17:15 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:31.328 ************************************ 00:07:31.328 END TEST accel_dif_verify 00:07:31.328 ************************************ 00:07:31.328 09:17:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:31.328 09:17:15 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:31.328 09:17:15 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:31.328 09:17:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.328 09:17:15 accel -- common/autotest_common.sh@10 -- # set +x 00:07:31.328 ************************************ 00:07:31.328 START TEST accel_dif_generate 00:07:31.328 ************************************ 00:07:31.328 09:17:15 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:07:31.328 09:17:15 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:31.328 09:17:15 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:31.328 09:17:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:31.328 
09:17:15 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:31.328 09:17:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:31.328 09:17:15 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:31.328 09:17:15 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:31.328 09:17:15 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:31.328 09:17:15 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:31.328 09:17:15 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.328 09:17:15 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.328 09:17:15 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:31.328 09:17:15 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:31.328 09:17:15 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:31.328 [2024-07-14 09:17:15.561737] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:31.328 [2024-07-14 09:17:15.561790] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid620160 ] 00:07:31.328 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.328 [2024-07-14 09:17:15.621789] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.328 [2024-07-14 09:17:15.713478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.328 09:17:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:31.328 09:17:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:31.328 09:17:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:31.328 09:17:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:31.328 09:17:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:31.328 09:17:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:31.328 09:17:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:31.328 09:17:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:31.328 09:17:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:31.328 09:17:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:31.328 09:17:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:31.328 09:17:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:31.328 09:17:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:31.328 09:17:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:31.328 09:17:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:31.328 09:17:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:31.328 09:17:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:31.328 09:17:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:31.328 09:17:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:31.328 09:17:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:31.328 09:17:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:31.328 09:17:15 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:31.328 09:17:15 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:31.328 09:17:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:31.328 09:17:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:31.328 09:17:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:31.328 09:17:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:31.328 09:17:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:31.328 09:17:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:31.328 09:17:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:31.328 09:17:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:31.328 09:17:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:31.328 09:17:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:31.328 09:17:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:31.328 09:17:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:31.328 09:17:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:31.328 09:17:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:31.328 09:17:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:31.328 09:17:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:31.328 09:17:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:31.328 09:17:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:31.328 09:17:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:31.328 09:17:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:31.328 09:17:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:31.587 09:17:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:31.587 09:17:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:31.587 09:17:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:31.587 09:17:15 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:31.587 09:17:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:31.587 09:17:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:31.587 09:17:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:31.587 09:17:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:31.587 09:17:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:31.587 09:17:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:31.587 09:17:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:31.587 09:17:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:31.587 09:17:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:31.587 09:17:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:31.587 09:17:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:31.587 09:17:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:31.587 09:17:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:31.587 09:17:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:31.587 09:17:15 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:07:31.587 09:17:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:31.587 09:17:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:31.587 09:17:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:31.587 09:17:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:31.587 09:17:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:31.587 09:17:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:31.587 09:17:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:31.587 09:17:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:31.587 09:17:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:31.587 09:17:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:31.587 09:17:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:31.587 09:17:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:31.587 09:17:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:31.587 09:17:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:31.587 09:17:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:32.522 09:17:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:32.522 09:17:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:32.522 09:17:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:32.522 09:17:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:32.522 09:17:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:32.522 09:17:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:32.522 09:17:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:32.522 09:17:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:32.522 09:17:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:32.522 09:17:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:32.522 09:17:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:32.522 09:17:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:32.522 09:17:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:32.522 09:17:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:32.522 09:17:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:32.522 09:17:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:32.522 09:17:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:32.522 09:17:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:32.522 09:17:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:32.522 09:17:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:32.522 09:17:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:32.522 09:17:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:32.522 09:17:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:32.522 09:17:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:32.522 09:17:16 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:32.522 09:17:16 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:32.522 09:17:16 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:32.522 00:07:32.522 real 0m1.390s 00:07:32.522 user 0m1.266s 00:07:32.522 sys 0m0.128s 00:07:32.522 09:17:16 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:32.522 09:17:16 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:32.522 ************************************ 00:07:32.522 END TEST accel_dif_generate 00:07:32.523 ************************************ 00:07:32.523 09:17:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:32.523 09:17:16 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:32.523 09:17:16 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:32.523 09:17:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.523 09:17:16 accel -- common/autotest_common.sh@10 -- # set +x 00:07:32.781 ************************************ 00:07:32.781 START TEST accel_dif_generate_copy 00:07:32.781 ************************************ 00:07:32.781 09:17:16 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:07:32.781 09:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:32.781 09:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:32.781 09:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.781 09:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:32.781 09:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.781 09:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:32.781 09:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:32.781 09:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:32.781 09:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:32.782 09:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.782 09:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.782 09:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:32.782 09:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:32.782 09:17:16 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:32.782 [2024-07-14 09:17:16.993541] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
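The dif_generate pass just completed and the dif_generate_copy pass starting here round out the DIF coverage; per the run_test lines, both follow the same 1-second software-path pattern and differ only in the workload name, with dif_generate_copy presumably writing the generated protection information into a separate output buffer rather than in place (a reading of the workload name, not something the trace states). Sketches of the two invocations, same assumptions as the earlier examples:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_generate
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy
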
00:07:32.782 [2024-07-14 09:17:16.993604] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid620428 ] 00:07:32.782 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.782 [2024-07-14 09:17:17.056392] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.782 [2024-07-14 09:17:17.149371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 
00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.782 09:17:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:34.155 09:17:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:34.155 09:17:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:34.155 09:17:18 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:07:34.155 09:17:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:34.155 09:17:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:34.155 09:17:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:34.155 09:17:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:34.155 09:17:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:34.155 09:17:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:34.155 09:17:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:34.155 09:17:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:34.155 09:17:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:34.155 09:17:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:34.155 09:17:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:34.155 09:17:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:34.155 09:17:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:34.155 09:17:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:34.155 09:17:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:34.155 09:17:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:34.155 09:17:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:34.155 09:17:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:34.156 09:17:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:34.156 09:17:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:34.156 09:17:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:34.156 09:17:18 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:34.156 09:17:18 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:34.156 09:17:18 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:34.156 00:07:34.156 real 0m1.401s 00:07:34.156 user 0m1.268s 00:07:34.156 sys 0m0.135s 00:07:34.156 09:17:18 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:34.156 09:17:18 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:34.156 ************************************ 00:07:34.156 END TEST accel_dif_generate_copy 00:07:34.156 ************************************ 00:07:34.156 09:17:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:34.156 09:17:18 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:34.156 09:17:18 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:34.156 09:17:18 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:34.156 09:17:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.156 09:17:18 accel -- common/autotest_common.sh@10 -- # set +x 00:07:34.156 ************************************ 00:07:34.156 START TEST accel_comp 00:07:34.156 ************************************ 00:07:34.156 09:17:18 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:34.156 09:17:18 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:07:34.156 09:17:18 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:34.156 09:17:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.156 09:17:18 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:34.156 09:17:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.156 09:17:18 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:34.156 09:17:18 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:34.156 09:17:18 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:34.156 09:17:18 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:34.156 09:17:18 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.156 09:17:18 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.156 09:17:18 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:34.156 09:17:18 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:34.156 09:17:18 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:34.156 [2024-07-14 09:17:18.439631] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:34.156 [2024-07-14 09:17:18.439696] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid620583 ] 00:07:34.156 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.156 [2024-07-14 09:17:18.500740] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.156 [2024-07-14 09:17:18.593519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.414 09:17:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:34.414 09:17:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.414 09:17:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.414 09:17:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.415 09:17:18 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.415 09:17:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:35.788 09:17:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:35.788 09:17:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:35.788 09:17:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:35.788 09:17:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:35.788 09:17:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:35.788 09:17:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:35.788 09:17:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:35.788 09:17:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:35.788 09:17:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:35.788 09:17:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:35.788 09:17:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:35.788 09:17:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:35.788 09:17:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:35.789 09:17:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:35.789 09:17:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:35.789 09:17:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:35.789 09:17:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:35.789 09:17:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:35.789 09:17:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:35.789 09:17:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:35.789 09:17:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:35.789 09:17:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:35.789 09:17:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:35.789 09:17:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:35.789 09:17:19 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:35.789 09:17:19 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:35.789 09:17:19 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:35.789 00:07:35.789 real 0m1.398s 00:07:35.789 user 0m1.258s 00:07:35.789 sys 0m0.143s 00:07:35.789 09:17:19 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:35.789 09:17:19 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:35.789 ************************************ 00:07:35.789 END TEST accel_comp 00:07:35.789 ************************************ 00:07:35.789 09:17:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:35.789 09:17:19 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:35.789 09:17:19 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:35.789 09:17:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.789 09:17:19 accel -- 
common/autotest_common.sh@10 -- # set +x 00:07:35.789 ************************************ 00:07:35.789 START TEST accel_decomp 00:07:35.789 ************************************ 00:07:35.789 09:17:19 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:35.789 09:17:19 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:35.789 09:17:19 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:35.789 09:17:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:35.789 09:17:19 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:35.789 09:17:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:35.789 09:17:19 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:35.789 09:17:19 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:35.789 09:17:19 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:35.789 09:17:19 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:35.789 09:17:19 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.789 09:17:19 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.789 09:17:19 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:35.789 09:17:19 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:35.789 09:17:19 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:35.789 [2024-07-14 09:17:19.875723] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:07:35.789 [2024-07-14 09:17:19.875789] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid620748 ] 00:07:35.789 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.789 [2024-07-14 09:17:19.940090] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.789 [2024-07-14 09:17:20.042344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:35.789 09:17:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:37.161 09:17:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:37.161 09:17:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:37.161 09:17:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:37.161 09:17:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:37.161 09:17:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:37.161 09:17:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:37.161 09:17:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:37.161 09:17:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:37.161 09:17:21 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:37.161 09:17:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:37.161 09:17:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:37.161 09:17:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:37.161 09:17:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:37.161 09:17:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:37.161 09:17:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:37.161 09:17:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:37.161 09:17:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:37.161 09:17:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:37.161 09:17:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:37.161 09:17:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:37.162 09:17:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:37.162 09:17:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:37.162 09:17:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:37.162 09:17:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:37.162 09:17:21 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:37.162 09:17:21 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:37.162 09:17:21 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:37.162 00:07:37.162 real 0m1.427s 00:07:37.162 user 0m1.288s 00:07:37.162 sys 0m0.143s 00:07:37.162 09:17:21 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.162 09:17:21 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:37.162 ************************************ 00:07:37.162 END TEST accel_decomp 00:07:37.162 ************************************ 00:07:37.162 09:17:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:37.162 09:17:21 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:37.162 09:17:21 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:37.162 09:17:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.162 09:17:21 accel -- common/autotest_common.sh@10 -- # set +x 00:07:37.162 ************************************ 00:07:37.162 START TEST accel_decomp_full 00:07:37.162 ************************************ 00:07:37.162 09:17:21 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:37.162 09:17:21 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:37.162 [2024-07-14 09:17:21.354525] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:37.162 [2024-07-14 09:17:21.354591] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid620911 ] 00:07:37.162 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.162 [2024-07-14 09:17:21.417845] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.162 [2024-07-14 09:17:21.509987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:37.162 09:17:21 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:37.162 09:17:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:38.532 09:17:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:38.532 09:17:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:38.532 09:17:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:38.532 09:17:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:38.532 09:17:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:38.532 09:17:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:38.532 09:17:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:38.532 09:17:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:38.532 09:17:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:38.532 09:17:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:38.532 09:17:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:38.532 09:17:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:38.532 09:17:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:38.532 09:17:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:38.532 09:17:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:38.532 09:17:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:38.532 09:17:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:38.532 09:17:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:38.532 09:17:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:38.532 09:17:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:38.532 09:17:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:38.532 09:17:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:38.532 09:17:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:38.532 09:17:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:38.532 09:17:22 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:38.532 09:17:22 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:38.532 09:17:22 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:38.532 00:07:38.532 real 0m1.428s 00:07:38.532 user 0m1.276s 00:07:38.532 sys 0m0.155s 00:07:38.532 09:17:22 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:38.532 09:17:22 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:38.532 ************************************ 00:07:38.532 END TEST accel_decomp_full 00:07:38.532 ************************************ 00:07:38.532 09:17:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:38.532 09:17:22 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:38.532 09:17:22 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
00:07:38.532 09:17:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.532 09:17:22 accel -- common/autotest_common.sh@10 -- # set +x 00:07:38.532 ************************************ 00:07:38.532 START TEST accel_decomp_mcore 00:07:38.532 ************************************ 00:07:38.532 09:17:22 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:38.532 09:17:22 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:38.532 09:17:22 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:38.532 09:17:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:38.532 09:17:22 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:38.532 09:17:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:38.532 09:17:22 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:38.532 09:17:22 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:38.532 09:17:22 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:38.532 09:17:22 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:38.532 09:17:22 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.532 09:17:22 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.532 09:17:22 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:38.532 09:17:22 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:38.532 09:17:22 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:38.532 [2024-07-14 09:17:22.823677] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:07:38.532 [2024-07-14 09:17:22.823743] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid621173 ] 00:07:38.532 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.532 [2024-07-14 09:17:22.886347] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:38.532 [2024-07-14 09:17:22.981790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:38.532 [2024-07-14 09:17:22.981845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:38.532 [2024-07-14 09:17:22.981964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:38.532 [2024-07-14 09:17:22.981967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.790 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:38.790 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:38.790 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:38.790 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:38.790 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:38.790 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:38.790 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:38.790 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:38.790 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:38.790 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:38.790 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:38.790 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:38.790 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:38.790 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:38.790 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:38.790 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:38.790 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:38.790 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:38.790 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:38.791 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:38.791 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:38.791 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:38.791 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:38.791 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:38.791 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:38.791 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:38.791 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:38.791 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:38.791 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:38.791 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:38.791 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:38.791 09:17:23 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:38.791 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:38.791 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:38.791 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:38.791 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:38.791 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:38.791 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:38.791 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:38.791 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:38.791 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:38.791 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:38.791 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:38.791 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:38.791 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:38.791 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:38.791 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:38.791 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:38.791 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:38.791 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:38.791 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:38.791 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:38.791 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:38.791 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:38.791 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:38.791 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:38.791 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:38.791 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:38.791 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:38.791 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:38.791 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:38.791 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:38.791 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:38.791 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:38.791 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:38.791 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:38.791 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:38.791 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:38.791 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:38.791 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:38.791 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:38.791 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:38.791 09:17:23 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:38.791 09:17:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.163 09:17:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:40.163 09:17:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.163 09:17:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.163 09:17:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.163 09:17:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:40.163 09:17:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.163 09:17:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.163 09:17:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.163 09:17:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:40.163 09:17:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.163 09:17:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.163 09:17:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.163 09:17:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:40.163 09:17:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.163 09:17:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.163 09:17:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.163 09:17:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:40.163 09:17:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.163 09:17:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.163 09:17:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.163 09:17:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:40.163 09:17:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.163 09:17:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.163 09:17:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.163 09:17:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:40.163 09:17:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.163 09:17:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.163 09:17:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.163 09:17:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:40.163 09:17:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.163 09:17:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.163 09:17:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.163 09:17:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:40.163 09:17:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.163 09:17:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.163 09:17:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.163 09:17:24 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:40.163 09:17:24 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:40.163 09:17:24 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:40.163 00:07:40.163 real 0m1.411s 00:07:40.163 user 0m4.685s 00:07:40.163 sys 0m0.165s 00:07:40.163 09:17:24 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:40.163 09:17:24 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:40.163 ************************************ 00:07:40.163 END TEST accel_decomp_mcore 00:07:40.163 ************************************ 00:07:40.163 09:17:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:40.163 09:17:24 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:40.163 09:17:24 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:40.163 09:17:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.163 09:17:24 accel -- common/autotest_common.sh@10 -- # set +x 00:07:40.163 ************************************ 00:07:40.163 START TEST accel_decomp_full_mcore 00:07:40.163 ************************************ 00:07:40.163 09:17:24 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:40.163 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:40.163 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:40.163 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.163 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:40.163 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.163 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:40.163 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:40.163 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:40.163 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:40.163 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.163 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.163 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:40.163 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:40.163 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:40.163 [2024-07-14 09:17:24.280032] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:07:40.164 [2024-07-14 09:17:24.280099] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid621332 ] 00:07:40.164 EAL: No free 2048 kB hugepages reported on node 1 00:07:40.164 [2024-07-14 09:17:24.342111] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:40.164 [2024-07-14 09:17:24.435277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:40.164 [2024-07-14 09:17:24.435344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:40.164 [2024-07-14 09:17:24.435442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:40.164 [2024-07-14 09:17:24.435445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.164 09:17:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:41.538 09:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:41.538 09:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:41.538 09:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:41.538 09:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:41.538 09:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:41.538 09:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:41.538 09:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:41.538 09:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:41.538 09:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:41.538 09:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:41.538 09:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:41.538 09:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:41.538 09:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:41.538 09:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:41.538 09:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:41.538 09:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:41.538 09:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:41.538 09:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:41.538 09:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:41.538 09:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:41.538 09:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:41.538 09:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:41.538 09:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:41.538 09:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:41.538 09:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:41.538 09:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:41.538 09:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:41.538 09:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:41.538 09:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:41.539 09:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:41.539 09:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:41.539 09:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:41.539 09:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:41.539 09:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:41.539 09:17:25 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:41.539 09:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:41.539 09:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:41.539 09:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:41.539 09:17:25 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:41.539 00:07:41.539 real 0m1.420s 00:07:41.539 user 0m4.749s 00:07:41.539 sys 0m0.145s 00:07:41.539 09:17:25 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:41.539 09:17:25 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:41.539 ************************************ 00:07:41.539 END TEST accel_decomp_full_mcore 00:07:41.539 ************************************ 00:07:41.539 09:17:25 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:41.539 09:17:25 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:41.539 09:17:25 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:41.539 09:17:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.539 09:17:25 accel -- common/autotest_common.sh@10 -- # set +x 00:07:41.539 ************************************ 00:07:41.539 START TEST accel_decomp_mthread 00:07:41.539 ************************************ 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:41.539 [2024-07-14 09:17:25.747499] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:07:41.539 [2024-07-14 09:17:25.747571] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid621499 ] 00:07:41.539 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.539 [2024-07-14 09:17:25.808301] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.539 [2024-07-14 09:17:25.902742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:41.539 09:17:25 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.539 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:41.540 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.540 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.540 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.540 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:41.540 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.540 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.540 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.540 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:41.540 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.540 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.540 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.540 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:41.540 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.540 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.540 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.540 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:41.540 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.540 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.540 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:41.540 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:41.540 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:41.540 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:41.540 09:17:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:42.912 09:17:27 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:07:42.912 09:17:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:42.912 09:17:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:42.912 09:17:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:42.912 09:17:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:42.912 09:17:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:42.912 09:17:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:42.912 09:17:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:42.912 09:17:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:42.912 09:17:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:42.912 09:17:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:42.912 09:17:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:42.912 09:17:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:42.912 09:17:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:42.912 09:17:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:42.912 09:17:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:42.912 09:17:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:42.912 09:17:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:42.912 09:17:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:42.912 09:17:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:42.912 09:17:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:42.912 09:17:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:42.912 09:17:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:42.912 09:17:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:42.912 09:17:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:42.912 09:17:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:42.912 09:17:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:42.912 09:17:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:42.912 09:17:27 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:42.913 09:17:27 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:42.913 09:17:27 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:42.913 00:07:42.913 real 0m1.408s 00:07:42.913 user 0m1.264s 00:07:42.913 sys 0m0.147s 00:07:42.913 09:17:27 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:42.913 09:17:27 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:42.913 ************************************ 00:07:42.913 END TEST accel_decomp_mthread 00:07:42.913 ************************************ 00:07:42.913 09:17:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:42.913 09:17:27 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:42.913 09:17:27 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:42.913 09:17:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.913 09:17:27 accel -- 
common/autotest_common.sh@10 -- # set +x 00:07:42.913 ************************************ 00:07:42.913 START TEST accel_decomp_full_mthread 00:07:42.913 ************************************ 00:07:42.913 09:17:27 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:42.913 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:42.913 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:42.913 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:42.913 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:42.913 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:42.913 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:42.913 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:42.913 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:42.913 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:42.913 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:42.913 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:42.913 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:42.913 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:42.913 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:42.913 [2024-07-14 09:17:27.200835] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:07:42.913 [2024-07-14 09:17:27.200915] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid621765 ] 00:07:42.913 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.913 [2024-07-14 09:17:27.262462] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.913 [2024-07-14 09:17:27.355266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.171 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:43.171 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.171 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.171 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.171 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:43.171 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.171 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.171 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.171 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:43.171 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.171 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.171 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.171 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:43.171 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.171 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.171 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.171 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:43.171 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.171 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.171 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.171 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:43.171 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.171 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.171 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.171 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:43.171 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.172 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:43.172 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.172 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.172 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:43.172 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.172 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.172 09:17:27 accel.accel_decomp_full_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:07:43.172 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:43.172 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.172 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.172 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.172 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:43.172 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.172 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:43.172 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.172 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.172 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:43.172 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.172 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.172 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.172 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:43.172 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.172 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.172 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.172 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:43.172 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.172 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.172 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.172 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:43.172 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.172 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.172 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.172 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:43.172 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.172 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.172 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.172 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:43.172 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.172 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.172 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.172 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:43.172 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.172 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.172 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:43.172 09:17:27 accel.accel_decomp_full_mthread -- 
accel/accel.sh@20 -- # val= 00:07:43.172 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:43.172 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:43.172 09:17:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.544 09:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:44.544 09:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.544 09:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.544 09:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.544 09:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:44.544 09:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.544 09:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.544 09:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.544 09:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:44.544 09:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.544 09:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.544 09:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.544 09:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:44.544 09:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.544 09:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.544 09:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.544 09:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:44.544 09:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.544 09:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.544 09:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.544 09:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:44.544 09:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.544 09:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.544 09:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.544 09:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:44.544 09:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.544 09:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.544 09:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.544 09:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:44.544 09:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:44.544 09:17:28 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:44.544 00:07:44.544 real 0m1.448s 00:07:44.544 user 0m1.302s 00:07:44.544 sys 0m0.149s 00:07:44.544 09:17:28 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:44.544 09:17:28 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:44.544 ************************************ 00:07:44.544 END TEST accel_decomp_full_mthread 
00:07:44.544 ************************************ 00:07:44.544 09:17:28 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:44.544 09:17:28 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:44.544 09:17:28 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:44.544 09:17:28 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:44.545 09:17:28 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:44.545 09:17:28 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:44.545 09:17:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.545 09:17:28 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:44.545 09:17:28 accel -- common/autotest_common.sh@10 -- # set +x 00:07:44.545 09:17:28 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.545 09:17:28 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.545 09:17:28 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:44.545 09:17:28 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:44.545 09:17:28 accel -- accel/accel.sh@41 -- # jq -r . 00:07:44.545 ************************************ 00:07:44.545 START TEST accel_dif_functional_tests 00:07:44.545 ************************************ 00:07:44.545 09:17:28 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:44.545 [2024-07-14 09:17:28.717751] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:44.545 [2024-07-14 09:17:28.717822] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid621929 ] 00:07:44.545 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.545 [2024-07-14 09:17:28.784394] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:44.545 [2024-07-14 09:17:28.879722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.545 [2024-07-14 09:17:28.879788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:44.545 [2024-07-14 09:17:28.879791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.545 00:07:44.545 00:07:44.545 CUnit - A unit testing framework for C - Version 2.1-3 00:07:44.545 http://cunit.sourceforge.net/ 00:07:44.545 00:07:44.545 00:07:44.545 Suite: accel_dif 00:07:44.545 Test: verify: DIF generated, GUARD check ...passed 00:07:44.545 Test: verify: DIF generated, APPTAG check ...passed 00:07:44.545 Test: verify: DIF generated, REFTAG check ...passed 00:07:44.545 Test: verify: DIF not generated, GUARD check ...[2024-07-14 09:17:28.963192] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:44.545 passed 00:07:44.545 Test: verify: DIF not generated, APPTAG check ...[2024-07-14 09:17:28.963273] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:44.545 passed 00:07:44.545 Test: verify: DIF not generated, REFTAG check ...[2024-07-14 09:17:28.963305] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:44.545 passed 00:07:44.545 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:44.545 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-14 09:17:28.963374] dif.c: 
841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:44.545 passed 00:07:44.545 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:44.545 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:44.545 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:44.545 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-14 09:17:28.963507] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:44.545 passed 00:07:44.545 Test: verify copy: DIF generated, GUARD check ...passed 00:07:44.545 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:44.545 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:44.545 Test: verify copy: DIF not generated, GUARD check ...[2024-07-14 09:17:28.963647] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:44.545 passed 00:07:44.545 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-14 09:17:28.963683] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:44.545 passed 00:07:44.545 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-14 09:17:28.963714] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:44.545 passed 00:07:44.545 Test: generate copy: DIF generated, GUARD check ...passed 00:07:44.545 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:44.545 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:44.545 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:44.545 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:44.545 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:44.545 Test: generate copy: iovecs-len validate ...[2024-07-14 09:17:28.963947] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:44.545 passed 00:07:44.545 Test: generate copy: buffer alignment validate ...passed 00:07:44.545 00:07:44.545 Run Summary: Type Total Ran Passed Failed Inactive 00:07:44.545 suites 1 1 n/a 0 0 00:07:44.545 tests 26 26 26 0 0 00:07:44.545 asserts 115 115 115 0 n/a 00:07:44.545 00:07:44.545 Elapsed time = 0.002 seconds 00:07:44.804 00:07:44.804 real 0m0.497s 00:07:44.804 user 0m0.749s 00:07:44.804 sys 0m0.176s 00:07:44.804 09:17:29 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:44.804 09:17:29 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:44.804 ************************************ 00:07:44.804 END TEST accel_dif_functional_tests 00:07:44.804 ************************************ 00:07:44.804 09:17:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:44.804 00:07:44.804 real 0m31.714s 00:07:44.804 user 0m35.073s 00:07:44.804 sys 0m4.585s 00:07:44.804 09:17:29 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:44.804 09:17:29 accel -- common/autotest_common.sh@10 -- # set +x 00:07:44.804 ************************************ 00:07:44.804 END TEST accel 00:07:44.804 ************************************ 00:07:44.804 09:17:29 -- common/autotest_common.sh@1142 -- # return 0 00:07:44.804 09:17:29 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:44.804 09:17:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:44.804 09:17:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.804 09:17:29 -- common/autotest_common.sh@10 -- # set +x 00:07:44.804 ************************************ 00:07:44.804 START TEST accel_rpc 00:07:44.804 ************************************ 00:07:44.804 09:17:29 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:45.064 * Looking for test storage... 00:07:45.064 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:45.064 09:17:29 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:45.064 09:17:29 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=622000 00:07:45.064 09:17:29 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:45.064 09:17:29 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 622000 00:07:45.064 09:17:29 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 622000 ']' 00:07:45.064 09:17:29 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.064 09:17:29 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:45.064 09:17:29 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.064 09:17:29 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:45.064 09:17:29 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.064 [2024-07-14 09:17:29.349329] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:07:45.064 [2024-07-14 09:17:29.349444] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid622000 ] 00:07:45.064 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.064 [2024-07-14 09:17:29.411661] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.064 [2024-07-14 09:17:29.496269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.363 09:17:29 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:45.363 09:17:29 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:45.363 09:17:29 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:45.363 09:17:29 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:45.363 09:17:29 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:45.363 09:17:29 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:45.363 09:17:29 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:45.363 09:17:29 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:45.363 09:17:29 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.363 09:17:29 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.363 ************************************ 00:07:45.363 START TEST accel_assign_opcode 00:07:45.363 ************************************ 00:07:45.363 09:17:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:07:45.363 09:17:29 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:45.363 09:17:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.363 09:17:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:45.363 [2024-07-14 09:17:29.592951] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:45.363 09:17:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.363 09:17:29 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:45.363 09:17:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.363 09:17:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:45.363 [2024-07-14 09:17:29.600972] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:45.364 09:17:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.364 09:17:29 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:45.364 09:17:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.364 09:17:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:45.622 09:17:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.622 09:17:29 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:45.622 09:17:29 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:45.622 09:17:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 
00:07:45.622 09:17:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:45.622 09:17:29 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:45.622 09:17:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.622 software 00:07:45.622 00:07:45.622 real 0m0.295s 00:07:45.622 user 0m0.038s 00:07:45.622 sys 0m0.008s 00:07:45.622 09:17:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:45.622 09:17:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:45.622 ************************************ 00:07:45.622 END TEST accel_assign_opcode 00:07:45.622 ************************************ 00:07:45.622 09:17:29 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:45.622 09:17:29 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 622000 00:07:45.622 09:17:29 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 622000 ']' 00:07:45.622 09:17:29 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 622000 00:07:45.622 09:17:29 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:07:45.622 09:17:29 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:45.622 09:17:29 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 622000 00:07:45.622 09:17:29 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:45.622 09:17:29 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:45.622 09:17:29 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 622000' 00:07:45.622 killing process with pid 622000 00:07:45.622 09:17:29 accel_rpc -- common/autotest_common.sh@967 -- # kill 622000 00:07:45.622 09:17:29 accel_rpc -- common/autotest_common.sh@972 -- # wait 622000 00:07:45.880 00:07:45.880 real 0m1.082s 00:07:45.880 user 0m1.014s 00:07:45.880 sys 0m0.433s 00:07:45.880 09:17:30 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:45.880 09:17:30 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.880 ************************************ 00:07:45.880 END TEST accel_rpc 00:07:45.880 ************************************ 00:07:46.139 09:17:30 -- common/autotest_common.sh@1142 -- # return 0 00:07:46.139 09:17:30 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:46.139 09:17:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:46.139 09:17:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.139 09:17:30 -- common/autotest_common.sh@10 -- # set +x 00:07:46.139 ************************************ 00:07:46.139 START TEST app_cmdline 00:07:46.139 ************************************ 00:07:46.139 09:17:30 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:46.139 * Looking for test storage... 
00:07:46.139 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:46.139 09:17:30 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:46.139 09:17:30 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=622217 00:07:46.139 09:17:30 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:46.139 09:17:30 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 622217 00:07:46.139 09:17:30 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 622217 ']' 00:07:46.139 09:17:30 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.139 09:17:30 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:46.139 09:17:30 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.139 09:17:30 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:46.139 09:17:30 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:46.139 [2024-07-14 09:17:30.486587] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:46.139 [2024-07-14 09:17:30.486699] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid622217 ] 00:07:46.139 EAL: No free 2048 kB hugepages reported on node 1 00:07:46.139 [2024-07-14 09:17:30.545099] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.398 [2024-07-14 09:17:30.631310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.656 09:17:30 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:46.656 09:17:30 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:07:46.656 09:17:30 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:46.914 { 00:07:46.914 "version": "SPDK v24.09-pre git sha1 719d03c6a", 00:07:46.914 "fields": { 00:07:46.914 "major": 24, 00:07:46.914 "minor": 9, 00:07:46.914 "patch": 0, 00:07:46.914 "suffix": "-pre", 00:07:46.914 "commit": "719d03c6a" 00:07:46.914 } 00:07:46.914 } 00:07:46.914 09:17:31 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:46.914 09:17:31 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:46.914 09:17:31 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:46.914 09:17:31 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:46.914 09:17:31 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:46.914 09:17:31 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.914 09:17:31 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:46.914 09:17:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:46.914 09:17:31 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:46.914 09:17:31 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.914 09:17:31 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:46.914 09:17:31 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:46.914 09:17:31 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:46.914 09:17:31 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:46.914 09:17:31 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:46.914 09:17:31 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:46.914 09:17:31 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:46.914 09:17:31 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:46.914 09:17:31 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:46.914 09:17:31 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:46.914 09:17:31 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:46.914 09:17:31 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:46.914 09:17:31 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:46.914 09:17:31 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:47.172 request: 00:07:47.172 { 00:07:47.172 "method": "env_dpdk_get_mem_stats", 00:07:47.172 "req_id": 1 00:07:47.172 } 00:07:47.172 Got JSON-RPC error response 00:07:47.172 response: 00:07:47.172 { 00:07:47.172 "code": -32601, 00:07:47.172 "message": "Method not found" 00:07:47.172 } 00:07:47.172 09:17:31 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:47.172 09:17:31 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:47.172 09:17:31 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:47.172 09:17:31 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:47.172 09:17:31 app_cmdline -- app/cmdline.sh@1 -- # killprocess 622217 00:07:47.172 09:17:31 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 622217 ']' 00:07:47.172 09:17:31 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 622217 00:07:47.172 09:17:31 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:07:47.172 09:17:31 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:47.172 09:17:31 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 622217 00:07:47.172 09:17:31 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:47.172 09:17:31 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:47.172 09:17:31 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 622217' 00:07:47.172 killing process with pid 622217 00:07:47.172 09:17:31 app_cmdline -- common/autotest_common.sh@967 -- # kill 622217 00:07:47.172 09:17:31 app_cmdline -- common/autotest_common.sh@972 -- # wait 622217 00:07:47.431 00:07:47.431 real 0m1.476s 00:07:47.431 user 0m1.804s 00:07:47.431 sys 0m0.463s 00:07:47.431 09:17:31 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:47.431 
09:17:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:47.431 ************************************ 00:07:47.431 END TEST app_cmdline 00:07:47.431 ************************************ 00:07:47.431 09:17:31 -- common/autotest_common.sh@1142 -- # return 0 00:07:47.431 09:17:31 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:47.431 09:17:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:47.431 09:17:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.431 09:17:31 -- common/autotest_common.sh@10 -- # set +x 00:07:47.690 ************************************ 00:07:47.690 START TEST version 00:07:47.690 ************************************ 00:07:47.690 09:17:31 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:47.690 * Looking for test storage... 00:07:47.690 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:47.690 09:17:31 version -- app/version.sh@17 -- # get_header_version major 00:07:47.690 09:17:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:47.690 09:17:31 version -- app/version.sh@14 -- # cut -f2 00:07:47.690 09:17:31 version -- app/version.sh@14 -- # tr -d '"' 00:07:47.690 09:17:31 version -- app/version.sh@17 -- # major=24 00:07:47.690 09:17:31 version -- app/version.sh@18 -- # get_header_version minor 00:07:47.690 09:17:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:47.690 09:17:31 version -- app/version.sh@14 -- # cut -f2 00:07:47.690 09:17:31 version -- app/version.sh@14 -- # tr -d '"' 00:07:47.690 09:17:31 version -- app/version.sh@18 -- # minor=9 00:07:47.690 09:17:31 version -- app/version.sh@19 -- # get_header_version patch 00:07:47.690 09:17:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:47.690 09:17:31 version -- app/version.sh@14 -- # cut -f2 00:07:47.690 09:17:31 version -- app/version.sh@14 -- # tr -d '"' 00:07:47.690 09:17:31 version -- app/version.sh@19 -- # patch=0 00:07:47.690 09:17:31 version -- app/version.sh@20 -- # get_header_version suffix 00:07:47.690 09:17:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:47.690 09:17:31 version -- app/version.sh@14 -- # cut -f2 00:07:47.690 09:17:31 version -- app/version.sh@14 -- # tr -d '"' 00:07:47.690 09:17:31 version -- app/version.sh@20 -- # suffix=-pre 00:07:47.690 09:17:31 version -- app/version.sh@22 -- # version=24.9 00:07:47.690 09:17:31 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:47.690 09:17:31 version -- app/version.sh@28 -- # version=24.9rc0 00:07:47.690 09:17:31 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:47.690 09:17:31 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 
00:07:47.690 09:17:32 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:47.690 09:17:32 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:47.690 00:07:47.690 real 0m0.109s 00:07:47.690 user 0m0.063s 00:07:47.690 sys 0m0.068s 00:07:47.690 09:17:32 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:47.690 09:17:32 version -- common/autotest_common.sh@10 -- # set +x 00:07:47.690 ************************************ 00:07:47.690 END TEST version 00:07:47.690 ************************************ 00:07:47.690 09:17:32 -- common/autotest_common.sh@1142 -- # return 0 00:07:47.690 09:17:32 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:47.690 09:17:32 -- spdk/autotest.sh@198 -- # uname -s 00:07:47.690 09:17:32 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:47.690 09:17:32 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:47.690 09:17:32 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:47.690 09:17:32 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:47.690 09:17:32 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:47.690 09:17:32 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:47.690 09:17:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:47.690 09:17:32 -- common/autotest_common.sh@10 -- # set +x 00:07:47.690 09:17:32 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:47.690 09:17:32 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:47.690 09:17:32 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:47.690 09:17:32 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:47.690 09:17:32 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:47.690 09:17:32 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:47.690 09:17:32 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:47.690 09:17:32 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:47.690 09:17:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.690 09:17:32 -- common/autotest_common.sh@10 -- # set +x 00:07:47.690 ************************************ 00:07:47.690 START TEST nvmf_tcp 00:07:47.690 ************************************ 00:07:47.690 09:17:32 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:47.690 * Looking for test storage... 00:07:47.690 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:47.690 09:17:32 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:47.690 09:17:32 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:47.690 09:17:32 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:47.690 09:17:32 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:47.690 09:17:32 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:47.690 09:17:32 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:47.690 09:17:32 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:47.690 09:17:32 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:47.690 09:17:32 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:47.690 09:17:32 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:47.690 09:17:32 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:47.690 09:17:32 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:47.690 09:17:32 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:47.949 09:17:32 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:47.949 09:17:32 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:47.949 09:17:32 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:47.949 09:17:32 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:47.949 09:17:32 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:47.949 09:17:32 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:47.949 09:17:32 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:47.949 09:17:32 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:47.949 09:17:32 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:47.949 09:17:32 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:47.949 09:17:32 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:47.949 09:17:32 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.949 09:17:32 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.949 09:17:32 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.949 09:17:32 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:47.949 09:17:32 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.949 09:17:32 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:47.949 09:17:32 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:47.949 09:17:32 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:47.949 09:17:32 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:47.949 09:17:32 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:47.949 09:17:32 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:47.949 09:17:32 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:47.949 09:17:32 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:47.949 09:17:32 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:47.949 09:17:32 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:47.949 09:17:32 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:47.949 09:17:32 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:47.949 09:17:32 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:47.949 09:17:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:47.949 09:17:32 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:47.949 09:17:32 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:47.950 09:17:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:47.950 09:17:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.950 09:17:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:47.950 ************************************ 00:07:47.950 START TEST nvmf_example 00:07:47.950 ************************************ 00:07:47.950 09:17:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:47.950 * Looking for test storage... 
00:07:47.950 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:47.950 09:17:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:47.950 09:17:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:47.950 09:17:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:47.950 09:17:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:47.950 09:17:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:47.950 09:17:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:47.950 09:17:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:47.950 09:17:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:47.950 09:17:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:47.950 09:17:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:47.950 09:17:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:47.950 09:17:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:47.950 09:17:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:47.950 09:17:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:47.950 09:17:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:47.950 09:17:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:47.950 09:17:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:47.950 09:17:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:47.950 09:17:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:47.950 09:17:32 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:47.950 09:17:32 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:47.950 09:17:32 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:47.950 09:17:32 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.950 09:17:32 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.950 09:17:32 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.950 09:17:32 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:47.950 09:17:32 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.950 09:17:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:47.950 09:17:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:47.950 09:17:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:47.950 09:17:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:47.950 09:17:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:47.950 09:17:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:47.950 09:17:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:47.950 09:17:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:47.950 09:17:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:47.950 09:17:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:47.950 09:17:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:47.950 09:17:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:47.950 09:17:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:47.950 09:17:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:47.950 09:17:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:47.950 09:17:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:47.950 09:17:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:47.950 09:17:32 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:07:47.950 09:17:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:47.950 09:17:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:47.950 09:17:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:47.950 09:17:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:47.950 09:17:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:47.950 09:17:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:47.950 09:17:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:47.950 09:17:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:47.950 09:17:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:47.950 09:17:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:47.950 09:17:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:47.950 09:17:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:47.950 09:17:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:47.950 09:17:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:49.850 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:49.850 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:49.850 Found net devices under 
0000:0a:00.0: cvl_0_0 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:49.850 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:49.850 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:50.109 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:50.109 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:07:50.109 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:50.109 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:50.109 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:07:50.109 00:07:50.109 --- 10.0.0.2 ping statistics --- 00:07:50.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.109 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:07:50.109 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:50.109 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:50.109 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:07:50.109 00:07:50.109 --- 10.0.0.1 ping statistics --- 00:07:50.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.109 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:07:50.109 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:50.109 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:50.109 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:50.109 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:50.109 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:50.109 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:50.109 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:50.109 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:50.109 09:17:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:50.109 09:17:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:50.109 09:17:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:50.109 09:17:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:50.109 09:17:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:50.109 09:17:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:50.109 09:17:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:50.109 09:17:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=624218 00:07:50.109 09:17:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:50.109 09:17:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:50.109 09:17:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 624218 00:07:50.109 09:17:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 624218 ']' 00:07:50.109 09:17:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.109 09:17:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:50.109 09:17:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
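The nvmf_example run above wires the two detected ports into a point-to-point TCP test bed on a single host: one port is moved into a private network namespace and addressed as the target (10.0.0.2), the other stays in the default namespace as the initiator (10.0.0.1), and an iptables rule opens the NVMe/TCP port before both directions are ping-tested. A condensed sketch of that setup, using the interface names cvl_0_0/cvl_0_1 reported in the trace:

    # Target side lives in its own namespace so initiator and target
    # can talk over real NICs on one machine.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # Initiator keeps 10.0.0.1, target gets 10.0.0.2 inside the namespace.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Allow NVMe/TCP traffic to the default port 4420.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # Sanity-check both directions before starting the target.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
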
00:07:50.109 09:17:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:50.109 09:17:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:50.109 EAL: No free 2048 kB hugepages reported on node 1 00:07:51.043 09:17:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:51.043 09:17:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:07:51.043 09:17:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:51.043 09:17:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:51.043 09:17:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:51.044 09:17:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:51.044 09:17:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.044 09:17:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:51.044 09:17:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.044 09:17:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:51.044 09:17:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.044 09:17:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:51.044 09:17:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.044 09:17:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:51.044 09:17:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:51.044 09:17:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.044 09:17:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:51.044 09:17:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.044 09:17:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:51.044 09:17:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:51.044 09:17:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.044 09:17:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:51.044 09:17:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.044 09:17:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:51.044 09:17:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.044 09:17:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:51.044 09:17:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.044 09:17:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:51.044 09:17:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:51.044 EAL: No free 2048 kB hugepages reported on node 1 
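Before the perf run, the example target is configured through the usual JSON-RPC calls: a TCP transport, one 64 MiB malloc bdev, a subsystem, a namespace and a TCP listener. Roughly the same sequence can be reproduced by hand with scripts/rpc.py against a running target (a sketch only; rpc_cmd in the trace is the test suite's wrapper for issuing these same RPCs, and here the target runs inside the cvl_0_0_ns_spdk namespace):

    # Assumes an nvmf target (e.g. build/examples/nvmf) is already running
    # and listening on the default RPC socket /var/tmp/spdk.sock.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512        # 64 MiB bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420

    # Drive it with the perf tool, matching the workload used in the trace.
    build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
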
00:08:03.239 Initializing NVMe Controllers 00:08:03.239 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:03.239 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:03.239 Initialization complete. Launching workers. 00:08:03.239 ======================================================== 00:08:03.239 Latency(us) 00:08:03.239 Device Information : IOPS MiB/s Average min max 00:08:03.239 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14713.40 57.47 4350.00 883.10 16375.60 00:08:03.240 ======================================================== 00:08:03.240 Total : 14713.40 57.47 4350.00 883.10 16375.60 00:08:03.240 00:08:03.240 09:17:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:03.240 09:17:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:03.240 09:17:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:03.240 09:17:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:08:03.240 09:17:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:03.240 09:17:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:08:03.240 09:17:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:03.240 09:17:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:03.240 rmmod nvme_tcp 00:08:03.240 rmmod nvme_fabrics 00:08:03.240 rmmod nvme_keyring 00:08:03.240 09:17:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:03.240 09:17:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:08:03.240 09:17:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:08:03.240 09:17:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 624218 ']' 00:08:03.240 09:17:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 624218 00:08:03.240 09:17:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 624218 ']' 00:08:03.240 09:17:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 624218 00:08:03.240 09:17:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:08:03.240 09:17:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:03.240 09:17:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 624218 00:08:03.240 09:17:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:08:03.240 09:17:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:08:03.240 09:17:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 624218' 00:08:03.240 killing process with pid 624218 00:08:03.240 09:17:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 624218 00:08:03.240 09:17:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 624218 00:08:03.240 nvmf threads initialize successfully 00:08:03.240 bdev subsystem init successfully 00:08:03.240 created a nvmf target service 00:08:03.240 create targets's poll groups done 00:08:03.240 all subsystems of target started 00:08:03.240 nvmf target is running 00:08:03.240 all subsystems of target stopped 00:08:03.240 destroy targets's poll groups done 00:08:03.240 destroyed the nvmf target service 00:08:03.240 bdev subsystem finish successfully 00:08:03.240 nvmf threads destroy successfully 00:08:03.240 09:17:45 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:03.240 09:17:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:03.240 09:17:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:03.240 09:17:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:03.240 09:17:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:03.240 09:17:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:03.240 09:17:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:03.240 09:17:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:03.810 09:17:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:03.810 09:17:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:03.810 09:17:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:03.810 09:17:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:03.810 00:08:03.810 real 0m15.880s 00:08:03.810 user 0m45.249s 00:08:03.810 sys 0m3.167s 00:08:03.810 09:17:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:03.810 09:17:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:03.810 ************************************ 00:08:03.810 END TEST nvmf_example 00:08:03.810 ************************************ 00:08:03.810 09:17:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:03.810 09:17:48 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:03.810 09:17:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:03.810 09:17:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.810 09:17:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:03.810 ************************************ 00:08:03.810 START TEST nvmf_filesystem 00:08:03.810 ************************************ 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:03.810 * Looking for test storage... 
00:08:03.810 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:03.810 09:17:48 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:08:03.810 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:08:03.811 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:08:03.811 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:08:03.811 09:17:48 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:08:03.811 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:08:03.811 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:08:03.811 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:08:03.811 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:08:03.811 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:08:03.811 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:03.811 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:08:03.811 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:08:03.811 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:08:03.811 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:08:03.811 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:08:03.811 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:03.811 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:08:03.811 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:08:03.811 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:08:03.811 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:08:03.811 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:08:03.811 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:08:03.811 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:08:03.811 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:08:03.811 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:08:03.811 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:08:03.811 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:08:03.811 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:03.811 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:08:03.811 09:17:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:08:03.811 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:03.811 09:17:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:03.811 09:17:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:03.811 09:17:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:03.811 09:17:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:03.811 09:17:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # 
_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:03.811 09:17:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:03.811 09:17:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:03.811 09:17:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:03.811 09:17:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:03.811 09:17:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:03.811 09:17:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:03.811 09:17:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:03.811 09:17:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:03.811 09:17:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:08:03.811 09:17:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:03.811 #define SPDK_CONFIG_H 00:08:03.811 #define SPDK_CONFIG_APPS 1 00:08:03.811 #define SPDK_CONFIG_ARCH native 00:08:03.811 #undef SPDK_CONFIG_ASAN 00:08:03.811 #undef SPDK_CONFIG_AVAHI 00:08:03.811 #undef SPDK_CONFIG_CET 00:08:03.811 #define SPDK_CONFIG_COVERAGE 1 00:08:03.811 #define SPDK_CONFIG_CROSS_PREFIX 00:08:03.811 #undef SPDK_CONFIG_CRYPTO 00:08:03.811 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:03.811 #undef SPDK_CONFIG_CUSTOMOCF 00:08:03.811 #undef SPDK_CONFIG_DAOS 00:08:03.811 #define SPDK_CONFIG_DAOS_DIR 00:08:03.811 #define SPDK_CONFIG_DEBUG 1 00:08:03.811 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:03.811 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:08:03.811 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:08:03.811 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:03.811 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:03.811 #undef SPDK_CONFIG_DPDK_UADK 00:08:03.811 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:03.811 #define SPDK_CONFIG_EXAMPLES 1 00:08:03.811 #undef SPDK_CONFIG_FC 00:08:03.811 #define SPDK_CONFIG_FC_PATH 00:08:03.811 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:03.811 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:03.811 #undef SPDK_CONFIG_FUSE 00:08:03.811 #undef SPDK_CONFIG_FUZZER 00:08:03.811 #define SPDK_CONFIG_FUZZER_LIB 00:08:03.811 #undef SPDK_CONFIG_GOLANG 00:08:03.811 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:03.811 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:08:03.811 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:03.811 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:08:03.811 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:03.811 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:03.811 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:03.811 #define SPDK_CONFIG_IDXD 1 00:08:03.811 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:03.811 #undef SPDK_CONFIG_IPSEC_MB 00:08:03.811 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:03.811 #define SPDK_CONFIG_ISAL 1 00:08:03.811 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:03.811 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:03.811 #define 
SPDK_CONFIG_LIBDIR 00:08:03.811 #undef SPDK_CONFIG_LTO 00:08:03.811 #define SPDK_CONFIG_MAX_LCORES 128 00:08:03.811 #define SPDK_CONFIG_NVME_CUSE 1 00:08:03.811 #undef SPDK_CONFIG_OCF 00:08:03.811 #define SPDK_CONFIG_OCF_PATH 00:08:03.811 #define SPDK_CONFIG_OPENSSL_PATH 00:08:03.811 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:03.811 #define SPDK_CONFIG_PGO_DIR 00:08:03.811 #undef SPDK_CONFIG_PGO_USE 00:08:03.811 #define SPDK_CONFIG_PREFIX /usr/local 00:08:03.811 #undef SPDK_CONFIG_RAID5F 00:08:03.811 #undef SPDK_CONFIG_RBD 00:08:03.811 #define SPDK_CONFIG_RDMA 1 00:08:03.811 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:03.811 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:03.811 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:03.811 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:03.811 #define SPDK_CONFIG_SHARED 1 00:08:03.811 #undef SPDK_CONFIG_SMA 00:08:03.811 #define SPDK_CONFIG_TESTS 1 00:08:03.811 #undef SPDK_CONFIG_TSAN 00:08:03.811 #define SPDK_CONFIG_UBLK 1 00:08:03.811 #define SPDK_CONFIG_UBSAN 1 00:08:03.811 #undef SPDK_CONFIG_UNIT_TESTS 00:08:03.811 #undef SPDK_CONFIG_URING 00:08:03.811 #define SPDK_CONFIG_URING_PATH 00:08:03.811 #undef SPDK_CONFIG_URING_ZNS 00:08:03.811 #undef SPDK_CONFIG_USDT 00:08:03.811 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:03.811 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:03.811 #define SPDK_CONFIG_VFIO_USER 1 00:08:03.812 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:03.812 #define SPDK_CONFIG_VHOST 1 00:08:03.812 #define SPDK_CONFIG_VIRTIO 1 00:08:03.812 #undef SPDK_CONFIG_VTUNE 00:08:03.812 #define SPDK_CONFIG_VTUNE_DIR 00:08:03.812 #define SPDK_CONFIG_WERROR 1 00:08:03.812 #define SPDK_CONFIG_WPDK_DIR 00:08:03.812 #undef SPDK_CONFIG_XNVME 00:08:03.812 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 
00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:08:03.812 
09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:08:03.812 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export 
SPDK_TEST_LVOL 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : v22.11.4 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:08:03.813 
09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:03.813 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export 
PYTHONDONTWRITEBYTECODE=1 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 
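Before the filesystem test begins, the harness pins down its runtime environment: sanitizer options, the leak-suppression file, library and Python paths, the default RPC socket, and the QEMU binaries. As a reference, here is a minimal sketch of reproducing just the sanitizer settings when launching a test binary by hand; ./my_test is a placeholder, and the option strings are the ones exported in the trace above.
# Sketch only -- mirrors the ASAN/UBSAN/LSAN options the harness exports above.
echo 'leak:libfuse3.so' > /var/tmp/asan_suppression_file        # same suppression the harness writes
export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
./my_test                                                        # placeholder for the actual test binary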
00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j48 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 625925 ]] 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 625925 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.xBV33m 00:08:03.814 
09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.xBV33m/tests/target /tmp/spdk.xBV33m 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:08:03.814 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=953643008 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4330786816 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=53452328960 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=61994708992 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=8542380032 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30941716480 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997352448 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 
-- # uses["$mount"]=55635968 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12390182912 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12398944256 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=8761344 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30996082688 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997356544 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=1273856 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6199463936 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6199468032 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:08:03.815 * Looking for test storage... 
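set_test_storage then walks the candidate directories (the test directory, a mktemp fallback under /tmp/spdk.xBV33m, and the fallback root) and keeps the first mount with enough free space for the roughly 2 GiB the test requested. A simplified sketch of the same free-space check, not the harness code itself:
# Simplified check (assumes GNU df; the harness instead parses 'df -T' into bash arrays as traced above).
requested=2214592512   # bytes requested in the trace above
target=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
avail=$(df --output=avail -B1 "$target" | tail -n1)
if (( avail >= requested )); then
    echo "using $target ($avail bytes free)"
else
    echo "not enough space here, try the next candidate" >&2
fi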
00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=53452328960 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=10756972544 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:03.815 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:03.815 09:17:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:03.816 09:17:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:03.816 09:17:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:03.816 09:17:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:03.816 09:17:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:03.816 09:17:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:03.816 09:17:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:03.816 09:17:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:03.816 09:17:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:03.816 09:17:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:03.816 09:17:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:03.816 09:17:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:03.816 09:17:48 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:03.816 09:17:48 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:03.816 09:17:48 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:03.816 09:17:48 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.816 09:17:48 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.816 09:17:48 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.816 09:17:48 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:08:03.816 09:17:48 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.816 09:17:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:08:03.816 09:17:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:03.816 09:17:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:03.816 09:17:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:03.816 09:17:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:03.816 09:17:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:03.816 09:17:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:03.816 09:17:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:03.816 09:17:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:03.816 09:17:48 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:03.816 09:17:48 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:03.816 09:17:48 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:08:03.816 09:17:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:03.816 09:17:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:03.816 09:17:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:03.816 09:17:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 
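nvmf/common.sh builds the initiator identity from nvme-cli: nvme gen-hostnqn produces the host NQN, and its trailing UUID becomes the host ID used later by nvme connect. A hedged sketch of one way to derive the same pair (the exact string handling inside common.sh may differ):
# Derive hostnqn/hostid as seen in the trace above (the extraction step is an assumption).
NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
NVME_HOSTID=${NVME_HOSTNQN##*:}       # keep only the UUID after the last ':'
echo "hostnqn=$NVME_HOSTNQN hostid=$NVME_HOSTID"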
00:08:03.816 09:17:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:03.816 09:17:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:03.816 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:03.816 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:03.816 09:17:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:03.816 09:17:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:03.816 09:17:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:08:03.816 09:17:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:06.349 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:06.349 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:08:06.349 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:06.349 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:06.349 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:06.349 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:06.349 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:06.349 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:08:06.349 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:06.349 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 
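The device-ID tables above (Intel E810 0x1592/0x159b, X722 0x37d2, plus the Mellanox IDs) drive the NIC discovery that follows. The script walks /sys/bus/pci itself; purely as an illustration, a rough lspci equivalent for the E810 IDs would be (lspci is not what the script uses):
# Illustration only: list E810 ports and their netdev names by PCI device ID.
for id in 1592 159b; do
    for pci in $(lspci -Dn -d "8086:$id" | awk '{print $1}'); do
        echo "$pci -> $(ls /sys/bus/pci/devices/$pci/net/ 2>/dev/null)"
    done
done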
00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:06.350 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:06.350 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:06.350 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 
-- # [[ tcp == tcp ]] 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:06.350 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:06.350 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:06.350 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:08:06.350 00:08:06.350 --- 10.0.0.2 ping statistics --- 00:08:06.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.350 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:06.350 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:06.350 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:08:06.350 00:08:06.350 --- 10.0.0.1 ping statistics --- 00:08:06.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.350 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:06.350 ************************************ 00:08:06.350 START TEST nvmf_filesystem_no_in_capsule 00:08:06.350 ************************************ 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=627552 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 627552 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 
627552 ']' 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:06.350 09:17:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:06.350 [2024-07-14 09:17:50.568632] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:08:06.350 [2024-07-14 09:17:50.568720] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:06.351 EAL: No free 2048 kB hugepages reported on node 1 00:08:06.351 [2024-07-14 09:17:50.637842] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:06.351 [2024-07-14 09:17:50.735010] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:06.351 [2024-07-14 09:17:50.735058] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:06.351 [2024-07-14 09:17:50.735076] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:06.351 [2024-07-14 09:17:50.735088] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:06.351 [2024-07-14 09:17:50.735097] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
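At this point the topology built by nvmf_tcp_init is in place and the target (nvmfpid 627552) has been launched inside it: one E810 port (cvl_0_0) moved into the cvl_0_0_ns_spdk namespace as the target side on 10.0.0.2, the other (cvl_0_1) left in the default namespace as the initiator on 10.0.0.1, TCP port 4420 opened, and nvme-tcp loaded. Condensed from the commands traced above, with interface names as discovered on this machine and the nvmf_tgt path shortened:
# Target in its own netns on 10.0.0.2, initiator in the default netns on 10.0.0.1.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
modprobe nvme-tcp
# Target app, as launched by nvmfappstart above:
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &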
00:08:06.351 [2024-07-14 09:17:50.735163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:06.351 [2024-07-14 09:17:50.735221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:06.351 [2024-07-14 09:17:50.735288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:06.351 [2024-07-14 09:17:50.735290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.609 09:17:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:06.609 09:17:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:08:06.609 09:17:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:06.609 09:17:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:06.609 09:17:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:06.609 09:17:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:06.609 09:17:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:06.609 09:17:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:06.609 09:17:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.609 09:17:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:06.609 [2024-07-14 09:17:50.891712] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:06.609 09:17:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.609 09:17:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:06.609 09:17:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.609 09:17:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:06.609 Malloc1 00:08:06.609 09:17:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.609 09:17:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:06.609 09:17:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.609 09:17:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:06.609 09:17:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.609 09:17:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:06.609 09:17:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.609 09:17:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:08:06.609 09:17:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.867 09:17:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:06.867 09:17:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.867 09:17:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:06.867 [2024-07-14 09:17:51.066159] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:06.867 09:17:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.867 09:17:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:06.867 09:17:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:08:06.867 09:17:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:08:06.867 09:17:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:08:06.867 09:17:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:08:06.867 09:17:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:06.867 09:17:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.867 09:17:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:06.867 09:17:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.867 09:17:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:08:06.867 { 00:08:06.867 "name": "Malloc1", 00:08:06.867 "aliases": [ 00:08:06.867 "3fd4efe2-f341-4fee-a355-b509fb04f31e" 00:08:06.867 ], 00:08:06.867 "product_name": "Malloc disk", 00:08:06.867 "block_size": 512, 00:08:06.867 "num_blocks": 1048576, 00:08:06.867 "uuid": "3fd4efe2-f341-4fee-a355-b509fb04f31e", 00:08:06.867 "assigned_rate_limits": { 00:08:06.867 "rw_ios_per_sec": 0, 00:08:06.867 "rw_mbytes_per_sec": 0, 00:08:06.867 "r_mbytes_per_sec": 0, 00:08:06.867 "w_mbytes_per_sec": 0 00:08:06.867 }, 00:08:06.867 "claimed": true, 00:08:06.867 "claim_type": "exclusive_write", 00:08:06.867 "zoned": false, 00:08:06.867 "supported_io_types": { 00:08:06.867 "read": true, 00:08:06.867 "write": true, 00:08:06.867 "unmap": true, 00:08:06.867 "flush": true, 00:08:06.867 "reset": true, 00:08:06.867 "nvme_admin": false, 00:08:06.867 "nvme_io": false, 00:08:06.867 "nvme_io_md": false, 00:08:06.867 "write_zeroes": true, 00:08:06.867 "zcopy": true, 00:08:06.867 "get_zone_info": false, 00:08:06.867 "zone_management": false, 00:08:06.867 "zone_append": false, 00:08:06.867 "compare": false, 00:08:06.867 "compare_and_write": false, 00:08:06.867 "abort": true, 00:08:06.867 "seek_hole": false, 00:08:06.867 "seek_data": false, 00:08:06.867 "copy": true, 00:08:06.867 "nvme_iov_md": false 00:08:06.867 }, 00:08:06.867 "memory_domains": [ 00:08:06.867 { 
00:08:06.867 "dma_device_id": "system", 00:08:06.867 "dma_device_type": 1 00:08:06.867 }, 00:08:06.867 { 00:08:06.867 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:06.867 "dma_device_type": 2 00:08:06.867 } 00:08:06.867 ], 00:08:06.867 "driver_specific": {} 00:08:06.867 } 00:08:06.867 ]' 00:08:06.867 09:17:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:08:06.867 09:17:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:08:06.867 09:17:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:08:06.867 09:17:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:08:06.867 09:17:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:08:06.867 09:17:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:08:06.867 09:17:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:06.867 09:17:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:07.441 09:17:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:07.441 09:17:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:08:07.441 09:17:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:07.441 09:17:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:07.441 09:17:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:08:09.374 09:17:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:09.374 09:17:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:09.374 09:17:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:09.374 09:17:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:09.374 09:17:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:09.374 09:17:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:08:09.374 09:17:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:09.374 09:17:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:09.374 09:17:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:09.374 09:17:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # 
sec_size_to_bytes nvme0n1 00:08:09.374 09:17:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:09.374 09:17:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:09.374 09:17:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:09.374 09:17:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:09.374 09:17:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:09.374 09:17:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:09.374 09:17:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:09.632 09:17:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:10.196 09:17:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:11.130 09:17:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:11.130 09:17:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:11.130 09:17:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:11.130 09:17:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:11.130 09:17:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:11.130 ************************************ 00:08:11.130 START TEST filesystem_ext4 00:08:11.130 ************************************ 00:08:11.130 09:17:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:11.130 09:17:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:11.130 09:17:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:11.130 09:17:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:11.130 09:17:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:08:11.130 09:17:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:11.130 09:17:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:08:11.130 09:17:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:08:11.130 09:17:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:08:11.130 09:17:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:08:11.130 09:17:55 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:11.130 mke2fs 1.46.5 (30-Dec-2021) 00:08:11.130 Discarding device blocks: 0/522240 done 00:08:11.130 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:11.130 Filesystem UUID: 8d4e9496-b8c4-44f4-a702-d07581600641 00:08:11.130 Superblock backups stored on blocks: 00:08:11.130 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:11.130 00:08:11.130 Allocating group tables: 0/64 done 00:08:11.130 Writing inode tables: 0/64 done 00:08:11.387 Creating journal (8192 blocks): done 00:08:11.387 Writing superblocks and filesystem accounting information: 0/64 done 00:08:11.387 00:08:11.387 09:17:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:08:11.387 09:17:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:11.387 09:17:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:11.387 09:17:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:08:11.387 09:17:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:11.387 09:17:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:08:11.387 09:17:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:11.387 09:17:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:11.644 09:17:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 627552 00:08:11.644 09:17:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:11.644 09:17:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:11.644 09:17:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:11.644 09:17:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:11.644 00:08:11.644 real 0m0.464s 00:08:11.644 user 0m0.014s 00:08:11.644 sys 0m0.060s 00:08:11.644 09:17:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:11.644 09:17:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:11.644 ************************************ 00:08:11.644 END TEST filesystem_ext4 00:08:11.644 ************************************ 00:08:11.644 09:17:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:11.644 09:17:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:11.644 09:17:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:11.644 09:17:55 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:11.644 09:17:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:11.644 ************************************ 00:08:11.644 START TEST filesystem_btrfs 00:08:11.644 ************************************ 00:08:11.644 09:17:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:11.644 09:17:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:11.644 09:17:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:11.644 09:17:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:11.644 09:17:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:08:11.644 09:17:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:11.644 09:17:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:08:11.644 09:17:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:08:11.644 09:17:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:08:11.644 09:17:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:08:11.644 09:17:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:11.902 btrfs-progs v6.6.2 00:08:11.902 See https://btrfs.readthedocs.io for more information. 00:08:11.902 00:08:11.902 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:11.902 NOTE: several default settings have changed in version 5.15, please make sure 00:08:11.902 this does not affect your deployments: 00:08:11.902 - DUP for metadata (-m dup) 00:08:11.902 - enabled no-holes (-O no-holes) 00:08:11.902 - enabled free-space-tree (-R free-space-tree) 00:08:11.902 00:08:11.902 Label: (null) 00:08:11.902 UUID: 96f09e5f-0f26-4854-9243-e8a17434b4be 00:08:11.902 Node size: 16384 00:08:11.902 Sector size: 4096 00:08:11.902 Filesystem size: 510.00MiB 00:08:11.902 Block group profiles: 00:08:11.902 Data: single 8.00MiB 00:08:11.902 Metadata: DUP 32.00MiB 00:08:11.902 System: DUP 8.00MiB 00:08:11.902 SSD detected: yes 00:08:11.902 Zoned device: no 00:08:11.902 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:11.902 Runtime features: free-space-tree 00:08:11.902 Checksum: crc32c 00:08:11.902 Number of devices: 1 00:08:11.902 Devices: 00:08:11.902 ID SIZE PATH 00:08:11.902 1 510.00MiB /dev/nvme0n1p1 00:08:11.902 00:08:11.902 09:17:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:08:11.902 09:17:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:12.159 09:17:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:12.159 09:17:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:08:12.417 09:17:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:12.417 09:17:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:08:12.417 09:17:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:12.417 09:17:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:12.417 09:17:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 627552 00:08:12.417 09:17:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:12.417 09:17:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:12.417 09:17:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:12.417 09:17:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:12.417 00:08:12.417 real 0m0.755s 00:08:12.417 user 0m0.016s 00:08:12.417 sys 0m0.124s 00:08:12.417 09:17:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:12.417 09:17:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:12.417 ************************************ 00:08:12.417 END TEST filesystem_btrfs 00:08:12.417 ************************************ 00:08:12.417 09:17:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:12.417 09:17:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:12.417 09:17:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:12.417 09:17:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:12.417 09:17:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:12.417 ************************************ 00:08:12.417 START TEST filesystem_xfs 00:08:12.417 ************************************ 00:08:12.417 09:17:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:08:12.417 09:17:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:12.417 09:17:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:12.417 09:17:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:12.417 09:17:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:08:12.417 09:17:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:12.417 09:17:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:08:12.417 09:17:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:08:12.417 09:17:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:08:12.417 09:17:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:08:12.417 09:17:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:12.417 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:12.417 = sectsz=512 attr=2, projid32bit=1 00:08:12.417 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:12.417 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:12.417 data = bsize=4096 blocks=130560, imaxpct=25 00:08:12.417 = sunit=0 swidth=0 blks 00:08:12.417 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:12.417 log =internal log bsize=4096 blocks=16384, version=2 00:08:12.418 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:12.418 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:13.348 Discarding blocks...Done. 
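After each mkfs (ext4 and btrfs above, xfs here) the harness runs the same sanity check that follows below: use the filesystem briefly, unmount it, and confirm the target process and the exported device survived. A stand-alone sketch of that check, using the device, mountpoint, and pid from this run:
mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa
sync
rm /mnt/device/aaa
sync
umount /mnt/device
kill -0 "$nvmfpid"                          # target process (627552 here) still alive
lsblk -l -o NAME | grep -q -w nvme0n1       # namespace still visible on the host
lsblk -l -o NAME | grep -q -w nvme0n1p1     # test partition still present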
00:08:13.348 09:17:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:08:13.348 09:17:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:15.875 09:17:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:15.875 09:17:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:08:15.875 09:17:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:15.875 09:17:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:08:15.875 09:17:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:08:15.875 09:17:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:15.875 09:17:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 627552 00:08:15.875 09:17:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:15.875 09:17:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:15.875 09:17:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:15.875 09:17:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:15.875 00:08:15.875 real 0m3.070s 00:08:15.875 user 0m0.012s 00:08:15.875 sys 0m0.063s 00:08:15.875 09:17:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:15.875 09:17:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:15.875 ************************************ 00:08:15.875 END TEST filesystem_xfs 00:08:15.875 ************************************ 00:08:15.875 09:17:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:15.875 09:17:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:15.875 09:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:15.875 09:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:15.875 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:15.875 09:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:15.875 09:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:08:15.875 09:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:15.875 09:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:15.875 09:18:00 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:15.875 09:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:15.875 09:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:15.875 09:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:15.875 09:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.875 09:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:15.875 09:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.875 09:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:15.875 09:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 627552 00:08:15.875 09:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 627552 ']' 00:08:15.875 09:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 627552 00:08:15.875 09:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:15.875 09:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:15.875 09:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 627552 00:08:15.875 09:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:15.875 09:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:15.876 09:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 627552' 00:08:15.876 killing process with pid 627552 00:08:15.876 09:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 627552 00:08:15.876 09:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 627552 00:08:16.134 09:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:16.134 00:08:16.134 real 0m10.061s 00:08:16.134 user 0m38.363s 00:08:16.134 sys 0m1.697s 00:08:16.134 09:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:16.134 09:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:16.134 ************************************ 00:08:16.134 END TEST nvmf_filesystem_no_in_capsule 00:08:16.134 ************************************ 00:08:16.393 09:18:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:16.393 09:18:00 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:16.393 09:18:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 
']' 00:08:16.393 09:18:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:16.393 09:18:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:16.393 ************************************ 00:08:16.393 START TEST nvmf_filesystem_in_capsule 00:08:16.393 ************************************ 00:08:16.393 09:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:08:16.393 09:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:16.393 09:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:16.393 09:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:16.393 09:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:16.393 09:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:16.393 09:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=629018 00:08:16.393 09:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:16.393 09:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 629018 00:08:16.393 09:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 629018 ']' 00:08:16.393 09:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.393 09:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:16.393 09:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.393 09:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:16.393 09:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:16.393 [2024-07-14 09:18:00.682985] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:08:16.393 [2024-07-14 09:18:00.683062] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:16.393 EAL: No free 2048 kB hugepages reported on node 1 00:08:16.393 [2024-07-14 09:18:00.749693] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:16.393 [2024-07-14 09:18:00.843013] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:16.393 [2024-07-14 09:18:00.843066] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
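The only functional difference between this in_capsule run and the no_in_capsule run above is the in-capsule data size passed to nvmf_create_transport via -c, driven by the 4096 argument to nvmf_filesystem_part; the transport RPC issued a few lines further on reflects it. Both invocations as they appear in this log, for contrast (rpc_cmd is the harness wrapper around scripts/rpc.py):
# no_in_capsule run earlier in this log: command capsules carry no data
# rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
# in_capsule run below: up to 4096 bytes of data travel inside the command capsule
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096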
00:08:16.393 [2024-07-14 09:18:00.843083] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:16.393 [2024-07-14 09:18:00.843096] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:16.393 [2024-07-14 09:18:00.843109] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:16.393 [2024-07-14 09:18:00.843177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.393 [2024-07-14 09:18:00.843252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:16.393 [2024-07-14 09:18:00.843303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:16.393 [2024-07-14 09:18:00.843307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.652 09:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:16.652 09:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:08:16.652 09:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:16.652 09:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:16.652 09:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:16.652 09:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:16.652 09:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:16.652 09:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:16.652 09:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.652 09:18:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:16.652 [2024-07-14 09:18:00.997558] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:16.652 09:18:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.652 09:18:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:16.652 09:18:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.652 09:18:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:16.909 Malloc1 00:08:16.909 09:18:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.909 09:18:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:16.909 09:18:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.909 09:18:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:16.909 09:18:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.909 09:18:01 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:16.910 09:18:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.910 09:18:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:16.910 09:18:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.910 09:18:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:16.910 09:18:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.910 09:18:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:16.910 [2024-07-14 09:18:01.174686] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:16.910 09:18:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.910 09:18:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:16.910 09:18:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:08:16.910 09:18:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:08:16.910 09:18:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:08:16.910 09:18:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:08:16.910 09:18:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:16.910 09:18:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.910 09:18:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:16.910 09:18:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.910 09:18:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:08:16.910 { 00:08:16.910 "name": "Malloc1", 00:08:16.910 "aliases": [ 00:08:16.910 "42998ba0-4c9a-46a2-9384-cc92cc72215d" 00:08:16.910 ], 00:08:16.910 "product_name": "Malloc disk", 00:08:16.910 "block_size": 512, 00:08:16.910 "num_blocks": 1048576, 00:08:16.910 "uuid": "42998ba0-4c9a-46a2-9384-cc92cc72215d", 00:08:16.910 "assigned_rate_limits": { 00:08:16.910 "rw_ios_per_sec": 0, 00:08:16.910 "rw_mbytes_per_sec": 0, 00:08:16.910 "r_mbytes_per_sec": 0, 00:08:16.910 "w_mbytes_per_sec": 0 00:08:16.910 }, 00:08:16.910 "claimed": true, 00:08:16.910 "claim_type": "exclusive_write", 00:08:16.910 "zoned": false, 00:08:16.910 "supported_io_types": { 00:08:16.910 "read": true, 00:08:16.910 "write": true, 00:08:16.910 "unmap": true, 00:08:16.910 "flush": true, 00:08:16.910 "reset": true, 00:08:16.910 "nvme_admin": false, 00:08:16.910 "nvme_io": false, 00:08:16.910 "nvme_io_md": false, 00:08:16.910 "write_zeroes": true, 00:08:16.910 "zcopy": true, 00:08:16.910 "get_zone_info": false, 00:08:16.910 "zone_management": false, 00:08:16.910 
"zone_append": false, 00:08:16.910 "compare": false, 00:08:16.910 "compare_and_write": false, 00:08:16.910 "abort": true, 00:08:16.910 "seek_hole": false, 00:08:16.910 "seek_data": false, 00:08:16.910 "copy": true, 00:08:16.910 "nvme_iov_md": false 00:08:16.910 }, 00:08:16.910 "memory_domains": [ 00:08:16.910 { 00:08:16.910 "dma_device_id": "system", 00:08:16.910 "dma_device_type": 1 00:08:16.910 }, 00:08:16.910 { 00:08:16.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.910 "dma_device_type": 2 00:08:16.910 } 00:08:16.910 ], 00:08:16.910 "driver_specific": {} 00:08:16.910 } 00:08:16.910 ]' 00:08:16.910 09:18:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:08:16.910 09:18:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:08:16.910 09:18:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:08:16.910 09:18:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:08:16.910 09:18:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:08:16.910 09:18:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:08:16.910 09:18:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:16.910 09:18:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:17.841 09:18:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:17.841 09:18:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:08:17.841 09:18:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:17.841 09:18:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:17.841 09:18:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:08:19.738 09:18:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:19.738 09:18:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:19.738 09:18:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:19.738 09:18:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:19.738 09:18:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:19.738 09:18:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:08:19.738 09:18:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:19.738 09:18:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:08:19.738 09:18:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:19.738 09:18:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:19.738 09:18:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:19.738 09:18:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:19.738 09:18:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:19.738 09:18:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:19.738 09:18:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:19.738 09:18:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:19.738 09:18:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:19.738 09:18:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:20.671 09:18:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:21.608 09:18:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:21.608 09:18:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:21.608 09:18:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:21.608 09:18:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:21.608 09:18:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:21.608 ************************************ 00:08:21.608 START TEST filesystem_in_capsule_ext4 00:08:21.608 ************************************ 00:08:21.608 09:18:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:21.608 09:18:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:21.608 09:18:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:21.608 09:18:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:21.608 09:18:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:08:21.608 09:18:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:21.608 09:18:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:08:21.608 09:18:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:08:21.608 09:18:05 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:08:21.608 09:18:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:08:21.608 09:18:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:21.608 mke2fs 1.46.5 (30-Dec-2021) 00:08:21.608 Discarding device blocks: 0/522240 done 00:08:21.608 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:21.608 Filesystem UUID: d5720178-3baa-4ca8-a1ef-25718a845833 00:08:21.608 Superblock backups stored on blocks: 00:08:21.608 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:21.608 00:08:21.608 Allocating group tables: 0/64 done 00:08:21.608 Writing inode tables: 0/64 done 00:08:22.547 Creating journal (8192 blocks): done 00:08:23.371 Writing superblocks and filesystem accounting information: 0/64 1/64 done 00:08:23.371 00:08:23.371 09:18:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:08:23.371 09:18:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:23.635 09:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:23.635 09:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:08:23.635 09:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:23.635 09:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:08:23.635 09:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:23.635 09:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:23.893 09:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 629018 00:08:23.893 09:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:23.893 09:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:23.893 09:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:23.893 09:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:23.893 00:08:23.893 real 0m2.300s 00:08:23.893 user 0m0.012s 00:08:23.893 sys 0m0.057s 00:08:23.893 09:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:23.893 09:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:23.893 ************************************ 00:08:23.893 END TEST filesystem_in_capsule_ext4 00:08:23.893 ************************************ 
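Between the nvme connect and the mkfs runs above, the harness resolved the block device from its serial and cross-checked the host-visible size against the Malloc bdev (the jq '.[] .block_size' / '.[] .num_blocks' lines). A condensed, hedged sketch of that host-side sequence; the hostnqn/hostid are taken from this log and the size arithmetic is a stand-in for the get_bdev_size/sec_size_to_bytes helpers:
# attach the controller and find the block device by its serial
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
             --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 \
             -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
# both sides should report 536870912 bytes (/sys size is in 512-byte sectors)
bs=$(./scripts/rpc.py bdev_get_bdevs -b Malloc1 | jq '.[] .block_size')
nb=$(./scripts/rpc.py bdev_get_bdevs -b Malloc1 | jq '.[] .num_blocks')
nvme_size=$(( $(cat /sys/block/$nvme_name/size) * 512 ))
(( nvme_size == bs * nb ))
# carve the single test partition used by the filesystem subtests
parted -s /dev/$nvme_name mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe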
00:08:23.894 09:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:23.894 09:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:23.894 09:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:23.894 09:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:23.894 09:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:23.894 ************************************ 00:08:23.894 START TEST filesystem_in_capsule_btrfs 00:08:23.894 ************************************ 00:08:23.894 09:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:23.894 09:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:23.894 09:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:23.894 09:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:23.894 09:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:08:23.894 09:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:23.894 09:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:08:23.894 09:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:08:23.894 09:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:08:23.894 09:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:08:23.894 09:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:24.152 btrfs-progs v6.6.2 00:08:24.152 See https://btrfs.readthedocs.io for more information. 00:08:24.152 00:08:24.152 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:24.152 NOTE: several default settings have changed in version 5.15, please make sure 00:08:24.152 this does not affect your deployments: 00:08:24.152 - DUP for metadata (-m dup) 00:08:24.152 - enabled no-holes (-O no-holes) 00:08:24.152 - enabled free-space-tree (-R free-space-tree) 00:08:24.152 00:08:24.152 Label: (null) 00:08:24.152 UUID: d1c0232b-c298-4197-8641-a0c4e591115a 00:08:24.152 Node size: 16384 00:08:24.152 Sector size: 4096 00:08:24.152 Filesystem size: 510.00MiB 00:08:24.152 Block group profiles: 00:08:24.152 Data: single 8.00MiB 00:08:24.152 Metadata: DUP 32.00MiB 00:08:24.152 System: DUP 8.00MiB 00:08:24.152 SSD detected: yes 00:08:24.152 Zoned device: no 00:08:24.152 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:24.152 Runtime features: free-space-tree 00:08:24.152 Checksum: crc32c 00:08:24.152 Number of devices: 1 00:08:24.152 Devices: 00:08:24.152 ID SIZE PATH 00:08:24.152 1 510.00MiB /dev/nvme0n1p1 00:08:24.152 00:08:24.152 09:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:08:24.152 09:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:24.754 09:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:24.754 09:18:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:08:24.754 09:18:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:24.755 09:18:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:08:24.755 09:18:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:24.755 09:18:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:24.755 09:18:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 629018 00:08:24.755 09:18:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:24.755 09:18:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:24.755 09:18:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:24.755 09:18:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:24.755 00:08:24.755 real 0m0.882s 00:08:24.755 user 0m0.021s 00:08:24.755 sys 0m0.103s 00:08:24.755 09:18:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:24.755 09:18:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:24.755 ************************************ 00:08:24.755 END TEST filesystem_in_capsule_btrfs 00:08:24.755 ************************************ 00:08:24.755 09:18:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:08:24.755 09:18:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:24.755 09:18:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:24.755 09:18:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:24.755 09:18:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:24.755 ************************************ 00:08:24.755 START TEST filesystem_in_capsule_xfs 00:08:24.755 ************************************ 00:08:24.755 09:18:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:08:24.755 09:18:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:24.755 09:18:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:24.755 09:18:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:24.755 09:18:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:08:24.755 09:18:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:24.755 09:18:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:08:24.755 09:18:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:08:24.755 09:18:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:08:24.755 09:18:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:08:24.755 09:18:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:24.755 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:24.755 = sectsz=512 attr=2, projid32bit=1 00:08:24.755 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:24.755 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:24.755 data = bsize=4096 blocks=130560, imaxpct=25 00:08:24.755 = sunit=0 swidth=0 blks 00:08:24.755 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:24.755 log =internal log bsize=4096 blocks=16384, version=2 00:08:24.755 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:24.755 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:26.132 Discarding blocks...Done. 
00:08:26.132 09:18:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:08:26.132 09:18:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:28.667 09:18:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:28.667 09:18:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:28.667 09:18:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:28.667 09:18:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:28.667 09:18:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:28.667 09:18:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:28.667 09:18:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 629018 00:08:28.667 09:18:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:28.667 09:18:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:28.667 09:18:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:28.667 09:18:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:28.667 00:08:28.667 real 0m3.605s 00:08:28.667 user 0m0.015s 00:08:28.667 sys 0m0.062s 00:08:28.668 09:18:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:28.668 09:18:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:28.668 ************************************ 00:08:28.668 END TEST filesystem_in_capsule_xfs 00:08:28.668 ************************************ 00:08:28.668 09:18:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:28.668 09:18:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:28.668 09:18:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:28.668 09:18:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:28.668 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:28.668 09:18:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:28.668 09:18:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:08:28.668 09:18:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:28.668 09:18:13 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:28.668 09:18:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:28.668 09:18:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:28.668 09:18:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:28.668 09:18:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:28.668 09:18:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.668 09:18:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:28.940 09:18:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.941 09:18:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:28.941 09:18:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 629018 00:08:28.941 09:18:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 629018 ']' 00:08:28.941 09:18:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 629018 00:08:28.941 09:18:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:28.941 09:18:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:28.941 09:18:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 629018 00:08:28.941 09:18:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:28.941 09:18:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:28.941 09:18:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 629018' 00:08:28.941 killing process with pid 629018 00:08:28.941 09:18:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 629018 00:08:28.941 09:18:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 629018 00:08:29.204 09:18:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:29.204 00:08:29.204 real 0m12.966s 00:08:29.204 user 0m49.817s 00:08:29.204 sys 0m1.798s 00:08:29.204 09:18:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:29.204 09:18:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:29.204 ************************************ 00:08:29.204 END TEST nvmf_filesystem_in_capsule 00:08:29.204 ************************************ 00:08:29.204 09:18:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:29.204 09:18:13 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:29.204 09:18:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- 
# nvmfcleanup 00:08:29.204 09:18:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:29.204 09:18:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:29.204 09:18:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:08:29.204 09:18:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:29.204 09:18:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:29.204 rmmod nvme_tcp 00:08:29.204 rmmod nvme_fabrics 00:08:29.204 rmmod nvme_keyring 00:08:29.464 09:18:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:29.464 09:18:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:29.464 09:18:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:29.464 09:18:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:29.464 09:18:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:29.464 09:18:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:29.464 09:18:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:29.464 09:18:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:29.464 09:18:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:29.464 09:18:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.464 09:18:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:29.464 09:18:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.371 09:18:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:31.371 00:08:31.371 real 0m27.617s 00:08:31.371 user 1m29.099s 00:08:31.371 sys 0m5.171s 00:08:31.371 09:18:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:31.371 09:18:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:31.371 ************************************ 00:08:31.371 END TEST nvmf_filesystem 00:08:31.371 ************************************ 00:08:31.371 09:18:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:31.371 09:18:15 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:31.371 09:18:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:31.371 09:18:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:31.371 09:18:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:31.371 ************************************ 00:08:31.371 START TEST nvmf_target_discovery 00:08:31.371 ************************************ 00:08:31.371 09:18:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:31.371 * Looking for test storage... 
00:08:31.630 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:31.630 09:18:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:31.630 09:18:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:31.630 09:18:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:31.630 09:18:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:31.630 09:18:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:31.630 09:18:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:31.630 09:18:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:31.630 09:18:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:31.630 09:18:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:31.630 09:18:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:31.630 09:18:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:31.630 09:18:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:31.630 09:18:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:31.630 09:18:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:31.630 09:18:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:31.630 09:18:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:31.630 09:18:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:31.630 09:18:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:31.630 09:18:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:31.630 09:18:15 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:31.630 09:18:15 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:31.630 09:18:15 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:31.630 09:18:15 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.630 09:18:15 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.630 09:18:15 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.630 09:18:15 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:31.630 09:18:15 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.630 09:18:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:31.630 09:18:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:31.630 09:18:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:31.630 09:18:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:31.630 09:18:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:31.630 09:18:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:31.630 09:18:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:31.630 09:18:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:31.630 09:18:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:31.630 09:18:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:31.630 09:18:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:31.630 09:18:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:31.630 09:18:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:31.630 09:18:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:31.630 09:18:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:31.630 09:18:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:31.630 09:18:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:08:31.630 09:18:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:31.630 09:18:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:31.630 09:18:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.630 09:18:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:31.630 09:18:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.630 09:18:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:31.630 09:18:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:31.630 09:18:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:08:31.630 09:18:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:33.543 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:33.543 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:33.543 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:33.543 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:33.543 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:33.543 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:33.543 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:33.543 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:33.543 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:33.543 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:33.543 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:33.543 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:33.543 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:33.543 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:33.543 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:33.543 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:33.543 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:33.543 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:33.543 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:33.543 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:33.543 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:33.543 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:33.543 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:33.543 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:33.543 09:18:17 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:33.543 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:33.543 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:33.543 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:33.543 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:33.543 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:33.543 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:33.543 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:33.543 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:33.543 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:33.543 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:33.543 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:33.543 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:33.543 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:33.543 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:33.543 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:33.543 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:33.543 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:33.543 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:33.543 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:33.543 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:33.543 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:33.543 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:33.543 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:33.544 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:33.544 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:33.544 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:33.544 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:33.544 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:33.544 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:33.544 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:33.544 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:33.544 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:33.544 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:33.544 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:33.544 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:33.544 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:33.544 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:33.544 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:33.544 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:33.544 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:33.544 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:33.544 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:33.544 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:33.544 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:33.544 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:33.544 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:33.544 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:33.544 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:33.544 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:33.544 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:33.544 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:33.544 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:33.544 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:33.544 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:33.544 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:33.544 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:33.544 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:33.544 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:33.544 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:33.544 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:33.544 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:33.544 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:33.544 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:33.544 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:33.544 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:33.544 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:33.544 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:08:33.544 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:33.544 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:33.544 09:18:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:33.803 09:18:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:33.803 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:33.803 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:08:33.803 00:08:33.803 --- 10.0.0.2 ping statistics --- 00:08:33.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.803 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:08:33.803 09:18:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:33.803 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:33.803 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:08:33.803 00:08:33.803 --- 10.0.0.1 ping statistics --- 00:08:33.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.803 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:08:33.803 09:18:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:33.803 09:18:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:33.803 09:18:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:33.803 09:18:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:33.803 09:18:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:33.803 09:18:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:33.803 09:18:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:33.803 09:18:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:33.803 09:18:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:33.803 09:18:18 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:33.803 09:18:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:33.803 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:33.803 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:33.803 09:18:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=633207 00:08:33.803 09:18:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:33.803 09:18:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 633207 00:08:33.803 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 633207 ']' 00:08:33.803 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.804 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:33.804 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:08:33.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.804 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:33.804 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:33.804 [2024-07-14 09:18:18.091835] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:08:33.804 [2024-07-14 09:18:18.091935] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:33.804 EAL: No free 2048 kB hugepages reported on node 1 00:08:33.804 [2024-07-14 09:18:18.160706] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:34.063 [2024-07-14 09:18:18.256446] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:34.063 [2024-07-14 09:18:18.256496] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:34.063 [2024-07-14 09:18:18.256512] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:34.063 [2024-07-14 09:18:18.256525] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:34.063 [2024-07-14 09:18:18.256537] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:34.063 [2024-07-14 09:18:18.256617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:34.063 [2024-07-14 09:18:18.256672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:34.063 [2024-07-14 09:18:18.256701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:34.063 [2024-07-14 09:18:18.256708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.063 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:34.064 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:08:34.064 09:18:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:34.064 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:34.064 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:34.064 09:18:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:34.064 09:18:18 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:34.064 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.064 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:34.064 [2024-07-14 09:18:18.420800] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:34.064 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.064 09:18:18 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:34.064 09:18:18 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:34.064 09:18:18 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
00:08:34.064 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.064 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:34.064 Null1 00:08:34.064 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.064 09:18:18 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:34.064 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.064 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:34.064 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.064 09:18:18 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:34.064 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.064 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:34.064 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.064 09:18:18 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:34.064 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.064 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:34.064 [2024-07-14 09:18:18.461104] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:34.064 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.064 09:18:18 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:34.064 09:18:18 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:34.064 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.064 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:34.064 Null2 00:08:34.064 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.064 09:18:18 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:34.064 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.064 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:34.064 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.064 09:18:18 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:34.064 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.064 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:34.064 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.064 09:18:18 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:34.064 09:18:18 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.064 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:34.064 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.064 09:18:18 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:34.064 09:18:18 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:34.064 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.064 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:34.064 Null3 00:08:34.064 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.064 09:18:18 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:34.064 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.064 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:34.064 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.064 09:18:18 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:34.064 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.064 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:34.324 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.324 09:18:18 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:34.324 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.324 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:34.324 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.324 09:18:18 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:34.324 09:18:18 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:34.324 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.324 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:34.324 Null4 00:08:34.324 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.324 09:18:18 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:34.324 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.324 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:34.324 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.324 09:18:18 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:34.324 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.324 09:18:18 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:34.324 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.324 09:18:18 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:34.324 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.324 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:34.324 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.324 09:18:18 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:34.324 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.324 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:34.324 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.324 09:18:18 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:34.324 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.324 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:34.325 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.325 09:18:18 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:08:34.584 00:08:34.584 Discovery Log Number of Records 6, Generation counter 6 00:08:34.584 =====Discovery Log Entry 0====== 00:08:34.585 trtype: tcp 00:08:34.585 adrfam: ipv4 00:08:34.585 subtype: current discovery subsystem 00:08:34.585 treq: not required 00:08:34.585 portid: 0 00:08:34.585 trsvcid: 4420 00:08:34.585 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:34.585 traddr: 10.0.0.2 00:08:34.585 eflags: explicit discovery connections, duplicate discovery information 00:08:34.585 sectype: none 00:08:34.585 =====Discovery Log Entry 1====== 00:08:34.585 trtype: tcp 00:08:34.585 adrfam: ipv4 00:08:34.585 subtype: nvme subsystem 00:08:34.585 treq: not required 00:08:34.585 portid: 0 00:08:34.585 trsvcid: 4420 00:08:34.585 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:34.585 traddr: 10.0.0.2 00:08:34.585 eflags: none 00:08:34.585 sectype: none 00:08:34.585 =====Discovery Log Entry 2====== 00:08:34.585 trtype: tcp 00:08:34.585 adrfam: ipv4 00:08:34.585 subtype: nvme subsystem 00:08:34.585 treq: not required 00:08:34.585 portid: 0 00:08:34.585 trsvcid: 4420 00:08:34.585 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:34.585 traddr: 10.0.0.2 00:08:34.585 eflags: none 00:08:34.585 sectype: none 00:08:34.585 =====Discovery Log Entry 3====== 00:08:34.585 trtype: tcp 00:08:34.585 adrfam: ipv4 00:08:34.585 subtype: nvme subsystem 00:08:34.585 treq: not required 00:08:34.585 portid: 0 00:08:34.585 trsvcid: 4420 00:08:34.585 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:34.585 traddr: 10.0.0.2 00:08:34.585 eflags: none 00:08:34.585 sectype: none 00:08:34.585 =====Discovery Log Entry 4====== 00:08:34.585 trtype: tcp 00:08:34.585 adrfam: ipv4 00:08:34.585 subtype: nvme subsystem 00:08:34.585 treq: not required 
00:08:34.585 portid: 0 00:08:34.585 trsvcid: 4420 00:08:34.585 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:34.585 traddr: 10.0.0.2 00:08:34.585 eflags: none 00:08:34.585 sectype: none 00:08:34.585 =====Discovery Log Entry 5====== 00:08:34.585 trtype: tcp 00:08:34.585 adrfam: ipv4 00:08:34.585 subtype: discovery subsystem referral 00:08:34.585 treq: not required 00:08:34.585 portid: 0 00:08:34.585 trsvcid: 4430 00:08:34.585 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:34.585 traddr: 10.0.0.2 00:08:34.585 eflags: none 00:08:34.585 sectype: none 00:08:34.585 09:18:18 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:34.585 Perform nvmf subsystem discovery via RPC 00:08:34.585 09:18:18 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:34.585 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.585 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:34.585 [ 00:08:34.585 { 00:08:34.585 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:34.585 "subtype": "Discovery", 00:08:34.585 "listen_addresses": [ 00:08:34.585 { 00:08:34.585 "trtype": "TCP", 00:08:34.585 "adrfam": "IPv4", 00:08:34.585 "traddr": "10.0.0.2", 00:08:34.585 "trsvcid": "4420" 00:08:34.585 } 00:08:34.585 ], 00:08:34.585 "allow_any_host": true, 00:08:34.585 "hosts": [] 00:08:34.585 }, 00:08:34.585 { 00:08:34.585 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:34.585 "subtype": "NVMe", 00:08:34.585 "listen_addresses": [ 00:08:34.585 { 00:08:34.585 "trtype": "TCP", 00:08:34.585 "adrfam": "IPv4", 00:08:34.585 "traddr": "10.0.0.2", 00:08:34.585 "trsvcid": "4420" 00:08:34.585 } 00:08:34.585 ], 00:08:34.585 "allow_any_host": true, 00:08:34.585 "hosts": [], 00:08:34.585 "serial_number": "SPDK00000000000001", 00:08:34.585 "model_number": "SPDK bdev Controller", 00:08:34.585 "max_namespaces": 32, 00:08:34.585 "min_cntlid": 1, 00:08:34.585 "max_cntlid": 65519, 00:08:34.585 "namespaces": [ 00:08:34.585 { 00:08:34.585 "nsid": 1, 00:08:34.585 "bdev_name": "Null1", 00:08:34.585 "name": "Null1", 00:08:34.585 "nguid": "B367EF5A65C44997A78BC349B1C8BB96", 00:08:34.585 "uuid": "b367ef5a-65c4-4997-a78b-c349b1c8bb96" 00:08:34.585 } 00:08:34.585 ] 00:08:34.585 }, 00:08:34.585 { 00:08:34.585 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:34.585 "subtype": "NVMe", 00:08:34.585 "listen_addresses": [ 00:08:34.585 { 00:08:34.585 "trtype": "TCP", 00:08:34.585 "adrfam": "IPv4", 00:08:34.585 "traddr": "10.0.0.2", 00:08:34.585 "trsvcid": "4420" 00:08:34.585 } 00:08:34.585 ], 00:08:34.585 "allow_any_host": true, 00:08:34.585 "hosts": [], 00:08:34.585 "serial_number": "SPDK00000000000002", 00:08:34.585 "model_number": "SPDK bdev Controller", 00:08:34.585 "max_namespaces": 32, 00:08:34.585 "min_cntlid": 1, 00:08:34.585 "max_cntlid": 65519, 00:08:34.585 "namespaces": [ 00:08:34.585 { 00:08:34.585 "nsid": 1, 00:08:34.585 "bdev_name": "Null2", 00:08:34.585 "name": "Null2", 00:08:34.585 "nguid": "A89FA9AC31DB4F35889F1E1E003F74E5", 00:08:34.585 "uuid": "a89fa9ac-31db-4f35-889f-1e1e003f74e5" 00:08:34.585 } 00:08:34.585 ] 00:08:34.585 }, 00:08:34.585 { 00:08:34.585 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:34.585 "subtype": "NVMe", 00:08:34.585 "listen_addresses": [ 00:08:34.585 { 00:08:34.585 "trtype": "TCP", 00:08:34.585 "adrfam": "IPv4", 00:08:34.585 "traddr": "10.0.0.2", 00:08:34.585 "trsvcid": "4420" 00:08:34.585 } 00:08:34.585 ], 00:08:34.585 "allow_any_host": true, 
00:08:34.585 "hosts": [], 00:08:34.585 "serial_number": "SPDK00000000000003", 00:08:34.585 "model_number": "SPDK bdev Controller", 00:08:34.585 "max_namespaces": 32, 00:08:34.586 "min_cntlid": 1, 00:08:34.586 "max_cntlid": 65519, 00:08:34.586 "namespaces": [ 00:08:34.586 { 00:08:34.586 "nsid": 1, 00:08:34.586 "bdev_name": "Null3", 00:08:34.586 "name": "Null3", 00:08:34.586 "nguid": "3C3247D5CD4746A385AEA3FCB0A4A139", 00:08:34.586 "uuid": "3c3247d5-cd47-46a3-85ae-a3fcb0a4a139" 00:08:34.586 } 00:08:34.586 ] 00:08:34.586 }, 00:08:34.586 { 00:08:34.586 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:34.586 "subtype": "NVMe", 00:08:34.586 "listen_addresses": [ 00:08:34.586 { 00:08:34.586 "trtype": "TCP", 00:08:34.586 "adrfam": "IPv4", 00:08:34.586 "traddr": "10.0.0.2", 00:08:34.586 "trsvcid": "4420" 00:08:34.586 } 00:08:34.586 ], 00:08:34.586 "allow_any_host": true, 00:08:34.586 "hosts": [], 00:08:34.586 "serial_number": "SPDK00000000000004", 00:08:34.586 "model_number": "SPDK bdev Controller", 00:08:34.586 "max_namespaces": 32, 00:08:34.586 "min_cntlid": 1, 00:08:34.586 "max_cntlid": 65519, 00:08:34.586 "namespaces": [ 00:08:34.586 { 00:08:34.586 "nsid": 1, 00:08:34.586 "bdev_name": "Null4", 00:08:34.586 "name": "Null4", 00:08:34.586 "nguid": "A185ED85BE9A4EF89F39ABBAA0BB93C6", 00:08:34.586 "uuid": "a185ed85-be9a-4ef8-9f39-abbaa0bb93c6" 00:08:34.586 } 00:08:34.586 ] 00:08:34.586 } 00:08:34.586 ] 00:08:34.586 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.586 09:18:18 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:08:34.586 09:18:18 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:34.586 09:18:18 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:34.586 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.586 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:34.586 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.586 09:18:18 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:34.586 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.586 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:34.586 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.586 09:18:18 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:34.586 09:18:18 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:34.586 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.586 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:34.586 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.586 09:18:18 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:34.586 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.586 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:34.586 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:08:34.586 09:18:18 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:34.586 09:18:18 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:34.586 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.586 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:34.586 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.586 09:18:18 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:34.586 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.586 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:34.586 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.586 09:18:18 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:34.586 09:18:18 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:34.586 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.586 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:34.586 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.586 09:18:18 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:34.586 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.586 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:34.586 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.586 09:18:18 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:34.586 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.586 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:34.586 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.586 09:18:18 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:34.586 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.586 09:18:18 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:34.586 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:34.586 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.586 09:18:18 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:34.586 09:18:18 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:34.586 09:18:18 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:34.586 09:18:18 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:34.586 09:18:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:34.586 09:18:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:34.586 09:18:18 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:34.586 09:18:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:34.586 09:18:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:34.586 09:18:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:34.586 rmmod nvme_tcp 00:08:34.586 rmmod nvme_fabrics 00:08:34.586 rmmod nvme_keyring 00:08:34.586 09:18:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:34.586 09:18:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:34.586 09:18:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:34.586 09:18:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 633207 ']' 00:08:34.587 09:18:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 633207 00:08:34.587 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 633207 ']' 00:08:34.587 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 633207 00:08:34.587 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:08:34.587 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:34.587 09:18:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 633207 00:08:34.587 09:18:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:34.587 09:18:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:34.587 09:18:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 633207' 00:08:34.587 killing process with pid 633207 00:08:34.587 09:18:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 633207 00:08:34.587 09:18:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 633207 00:08:34.846 09:18:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:34.846 09:18:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:34.846 09:18:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:34.846 09:18:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:34.846 09:18:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:34.846 09:18:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.846 09:18:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:34.846 09:18:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:37.380 09:18:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:37.380 00:08:37.380 real 0m5.498s 00:08:37.380 user 0m4.614s 00:08:37.380 sys 0m1.867s 00:08:37.380 09:18:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:37.380 09:18:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:37.380 ************************************ 00:08:37.380 END TEST nvmf_target_discovery 00:08:37.380 ************************************ 00:08:37.380 09:18:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # 
return 0 00:08:37.380 09:18:21 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:37.380 09:18:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:37.380 09:18:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:37.380 09:18:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:37.380 ************************************ 00:08:37.380 START TEST nvmf_referrals 00:08:37.380 ************************************ 00:08:37.380 09:18:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:37.380 * Looking for test storage... 00:08:37.380 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:37.380 09:18:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:37.380 09:18:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:37.380 09:18:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:37.380 09:18:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:37.380 09:18:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:37.380 09:18:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:37.380 09:18:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:37.380 09:18:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:37.380 09:18:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:37.380 09:18:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:37.380 09:18:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:37.380 09:18:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:37.380 09:18:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:37.380 09:18:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:37.380 09:18:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:37.380 09:18:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:37.380 09:18:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:37.380 09:18:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:37.380 09:18:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:37.380 09:18:21 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:37.380 09:18:21 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:37.380 09:18:21 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:37.380 09:18:21 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.380 09:18:21 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.381 09:18:21 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.381 09:18:21 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:37.381 09:18:21 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.381 09:18:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:37.381 09:18:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:37.381 09:18:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:37.381 09:18:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:37.381 09:18:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:37.381 09:18:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:37.381 09:18:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:37.381 09:18:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:37.381 09:18:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:37.381 09:18:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:37.381 09:18:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:37.381 09:18:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
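referrals.sh, traced from here on, exercises the discovery referral RPCs against those three loopback addresses on port 4430. Stripped of the test harness, the setup it drives later in this trace is roughly the following sketch, again assuming scripts/rpc.py against the running nvmf_tgt:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                           # TCP transport, as in referrals.sh@40
  scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery  # discovery service listener on 8009
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430           # register each referral
  done
  scripts/rpc.py nvmf_discovery_get_referrals | jq length                          # expect 3

The test then cross-checks the same three addresses from the host side with nvme discover against 10.0.0.2:8009, sorting both lists before comparing them, which is the get_referral_ips rpc/nvme pairing visible further down.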
00:08:37.381 09:18:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:37.381 09:18:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:37.381 09:18:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:37.381 09:18:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:37.381 09:18:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:37.381 09:18:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:37.381 09:18:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:37.381 09:18:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:37.381 09:18:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:37.381 09:18:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:37.381 09:18:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:37.381 09:18:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:37.381 09:18:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:37.381 09:18:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:37.381 09:18:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:37.381 09:18:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:39.285 09:18:23 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:39.285 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:39.285 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:39.285 09:18:23 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:39.285 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:39.285 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:39.285 09:18:23 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:39.285 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:39.285 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:08:39.285 00:08:39.285 --- 10.0.0.2 ping statistics --- 00:08:39.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.285 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:39.285 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:39.285 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:08:39.285 00:08:39.285 --- 10.0.0.1 ping statistics --- 00:08:39.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.285 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:39.285 09:18:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:39.286 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=635300 00:08:39.286 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:39.286 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 635300 00:08:39.286 09:18:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 635300 ']' 00:08:39.286 09:18:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.286 09:18:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:39.286 09:18:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
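The nvmf_tcp_init block above wires the two e810 ports into a namespace-based topology: the target port (cvl_0_0, 10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace while the initiator port (cvl_0_1, 10.0.0.1) stays in the root namespace, and both directions are ping-verified before the target starts. Condensed, and assuming the interface names from this run, the wiring is:

  ip netns add cvl_0_0_ns_spdk                                         # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address (root namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator

nvmf_tgt is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which is why its PID is captured as nvmfpid for the killprocess call at the end of the test.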
00:08:39.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.286 09:18:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:39.286 09:18:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:39.286 [2024-07-14 09:18:23.633804] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:08:39.286 [2024-07-14 09:18:23.633901] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:39.286 EAL: No free 2048 kB hugepages reported on node 1 00:08:39.286 [2024-07-14 09:18:23.702702] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:39.544 [2024-07-14 09:18:23.798839] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:39.544 [2024-07-14 09:18:23.798912] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:39.544 [2024-07-14 09:18:23.798929] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:39.544 [2024-07-14 09:18:23.798942] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:39.544 [2024-07-14 09:18:23.798954] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:39.544 [2024-07-14 09:18:23.799011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:39.544 [2024-07-14 09:18:23.799053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:39.544 [2024-07-14 09:18:23.799107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:39.544 [2024-07-14 09:18:23.799110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.544 09:18:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:39.544 09:18:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:08:39.544 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:39.544 09:18:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:39.544 09:18:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:39.544 09:18:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:39.544 09:18:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:39.544 09:18:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.544 09:18:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:39.544 [2024-07-14 09:18:23.948672] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:39.544 09:18:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.544 09:18:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:39.544 09:18:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.544 09:18:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:39.544 [2024-07-14 09:18:23.960898] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:08:39.544 09:18:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.544 09:18:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:39.544 09:18:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.544 09:18:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:39.544 09:18:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.544 09:18:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:39.544 09:18:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.544 09:18:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:39.544 09:18:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.544 09:18:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:39.544 09:18:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.544 09:18:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:39.544 09:18:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.544 09:18:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:39.544 09:18:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:39.544 09:18:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.544 09:18:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:39.802 09:18:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.802 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:39.802 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:39.802 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:39.802 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:39.802 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:39.802 09:18:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.802 09:18:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:39.802 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:39.802 09:18:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.802 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:39.802 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:39.802 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:39.802 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:39.802 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:39.802 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:39.802 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:39.802 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:39.802 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:39.802 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:39.802 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:39.802 09:18:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.802 09:18:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:39.802 09:18:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.802 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:39.802 09:18:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.802 09:18:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:39.802 09:18:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.802 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:39.802 09:18:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.802 09:18:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:39.802 09:18:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.802 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:39.802 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:39.802 09:18:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.802 09:18:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:39.802 09:18:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.062 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:40.062 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:40.062 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:40.062 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:40.062 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:40.062 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:40.062 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:40.062 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:40.062 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:40.062 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:08:40.062 09:18:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.062 09:18:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:40.062 09:18:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.062 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:40.062 09:18:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.062 09:18:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:40.062 09:18:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.062 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:40.062 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:40.062 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:40.062 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:40.062 09:18:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.062 09:18:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:40.062 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:40.062 09:18:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.062 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:40.062 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:40.062 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:40.062 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:40.062 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:40.062 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:40.062 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:40.062 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:40.320 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:40.320 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:40.320 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:40.320 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:40.320 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:40.320 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:40.320 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:40.320 09:18:24 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:40.320 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:40.320 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:40.320 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:40.320 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:40.320 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:40.320 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:40.320 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:40.320 09:18:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.320 09:18:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:40.320 09:18:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.320 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:40.320 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:40.320 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:40.320 09:18:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.320 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:40.320 09:18:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:40.320 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:40.320 09:18:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.578 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:40.578 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:40.578 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:40.578 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:40.578 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:40.578 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:40.578 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:40.578 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:40.578 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:40.578 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:40.579 09:18:24 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:40.579 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:40.579 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:40.579 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:40.579 09:18:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:40.837 09:18:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:40.837 09:18:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:40.837 09:18:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:40.837 09:18:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:40.837 09:18:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:40.837 09:18:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:40.837 09:18:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:40.837 09:18:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:40.837 09:18:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.837 09:18:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:40.837 09:18:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.837 09:18:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:40.837 09:18:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:40.837 09:18:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.837 09:18:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:40.837 09:18:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.837 09:18:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:40.837 09:18:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:40.837 09:18:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:40.837 09:18:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:40.837 09:18:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:40.837 09:18:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:40.838 09:18:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:40.838 
09:18:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:40.838 09:18:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:40.838 09:18:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:40.838 09:18:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:40.838 09:18:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:40.838 09:18:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:40.838 09:18:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:40.838 09:18:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:40.838 09:18:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:40.838 09:18:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:40.838 rmmod nvme_tcp 00:08:41.155 rmmod nvme_fabrics 00:08:41.155 rmmod nvme_keyring 00:08:41.155 09:18:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:41.155 09:18:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:41.155 09:18:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:41.155 09:18:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 635300 ']' 00:08:41.155 09:18:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 635300 00:08:41.155 09:18:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 635300 ']' 00:08:41.155 09:18:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 635300 00:08:41.155 09:18:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:08:41.155 09:18:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:41.155 09:18:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 635300 00:08:41.155 09:18:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:41.155 09:18:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:41.155 09:18:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 635300' 00:08:41.155 killing process with pid 635300 00:08:41.155 09:18:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 635300 00:08:41.155 09:18:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 635300 00:08:41.417 09:18:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:41.417 09:18:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:41.417 09:18:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:41.417 09:18:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:41.417 09:18:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:41.417 09:18:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.417 09:18:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:41.417 09:18:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:43.326 09:18:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:43.326 00:08:43.326 real 0m6.313s 00:08:43.326 user 0m8.467s 00:08:43.326 sys 0m2.146s 00:08:43.326 09:18:27 nvmf_tcp.nvmf_referrals -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:08:43.326 09:18:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:43.326 ************************************ 00:08:43.326 END TEST nvmf_referrals 00:08:43.326 ************************************ 00:08:43.326 09:18:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:43.326 09:18:27 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:43.326 09:18:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:43.326 09:18:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:43.326 09:18:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:43.327 ************************************ 00:08:43.327 START TEST nvmf_connect_disconnect 00:08:43.327 ************************************ 00:08:43.327 09:18:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:43.327 * Looking for test storage... 00:08:43.327 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:43.327 09:18:27 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:43.327 09:18:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:43.327 09:18:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:43.327 09:18:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:43.327 09:18:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:43.327 09:18:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:43.327 09:18:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:43.327 09:18:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:43.327 09:18:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:43.327 09:18:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:43.327 09:18:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:43.327 09:18:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:43.327 09:18:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:43.327 09:18:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:43.327 09:18:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:43.327 09:18:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:43.327 09:18:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:43.327 09:18:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:43.327 09:18:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:43.327 09:18:27 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:43.327 09:18:27 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:43.327 09:18:27 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:43.327 09:18:27 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.327 09:18:27 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.327 09:18:27 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.327 09:18:27 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:43.327 09:18:27 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.327 09:18:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:43.327 09:18:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:43.327 09:18:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:43.327 09:18:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:43.327 09:18:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:43.327 09:18:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:43.327 09:18:27 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:43.327 09:18:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:43.327 09:18:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:43.327 09:18:27 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:43.327 09:18:27 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:43.327 09:18:27 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:43.327 09:18:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:43.327 09:18:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:43.327 09:18:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:43.327 09:18:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:43.327 09:18:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:43.327 09:18:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:43.327 09:18:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:43.327 09:18:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:43.327 09:18:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:43.327 09:18:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:43.327 09:18:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:43.327 09:18:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:45.859 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:45.859 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:45.860 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:45.860 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:45.860 09:18:29 
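The discovery above walks a cached list of PCI IDs for supported NICs (Intel E810 0x1592/0x159b, X722 0x37d2, and several Mellanox ConnectX IDs) and reports each matching port. A rough stand-alone equivalent, reading vendor/device straight from sysfs instead of the test's pci_bus_cache, might look like:

# Hypothetical sysfs-based check (not the test's own helper): list Intel E810 ports,
# which is what produces the "Found 0000:0a:00.x (0x8086 - 0x159b)" lines above.
for pci in /sys/bus/pci/devices/*; do
    vendor=$(<"$pci/vendor"); device=$(<"$pci/device")
    if [[ $vendor == 0x8086 && ( $device == 0x159b || $device == 0x1592 ) ]]; then
        echo "Found ${pci##*/} ($vendor - $device)"
    fi
done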
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:45.860 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:45.860 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- 
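Each matching PCI address is then mapped to its kernel interface name by globbing /sys/bus/pci/devices/$pci/net/, which is where the cvl_0_0 / cvl_0_1 names come from; with two ports found, the first becomes the target interface and the second the initiator interface. A minimal sketch of that mapping (PCI addresses taken from this run):

# Map each PCI function to its net device, as in the "Found net devices under ..." lines above.
net_devs=()
for pci in 0000:0a:00.0 0000:0a:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
NVMF_TARGET_INTERFACE=${net_devs[0]}      # cvl_0_0 in this run
NVMF_INITIATOR_INTERFACE=${net_devs[1]}   # cvl_0_1 in this run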
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:45.860 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:45.860 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:08:45.860 00:08:45.860 --- 10.0.0.2 ping statistics --- 00:08:45.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.860 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:45.860 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
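nvmf_tcp_init then builds a two-endpoint topology on one host: the target port is moved into a private network namespace while the initiator port stays in the default namespace, so NVMe/TCP traffic really crosses between the two E810 ports. Collecting the commands traced above into one place (addresses and interface names are the ones from this log):

# Target side lives in cvl_0_0_ns_spdk at 10.0.0.2; initiator stays in the default netns at 10.0.0.1.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP from the target side
ping -c 1 10.0.0.2                                             # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator sanity check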
00:08:45.860 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:08:45.860 00:08:45.860 --- 10.0.0.1 ping statistics --- 00:08:45.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.860 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=637588 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 637588 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 637588 ']' 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:45.860 09:18:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:45.860 [2024-07-14 09:18:29.953248] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:08:45.860 [2024-07-14 09:18:29.953327] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:45.860 EAL: No free 2048 kB hugepages reported on node 1 00:08:45.860 [2024-07-14 09:18:30.026085] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:45.860 [2024-07-14 09:18:30.123931] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:45.860 [2024-07-14 09:18:30.123995] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:45.860 [2024-07-14 09:18:30.124012] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:45.860 [2024-07-14 09:18:30.124025] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:45.861 [2024-07-14 09:18:30.124036] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:45.861 [2024-07-14 09:18:30.124096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:45.861 [2024-07-14 09:18:30.124153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:45.861 [2024-07-14 09:18:30.124207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:45.861 [2024-07-14 09:18:30.124210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.861 09:18:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:45.861 09:18:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:08:45.861 09:18:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:45.861 09:18:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:45.861 09:18:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:45.861 09:18:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:45.861 09:18:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:45.861 09:18:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.861 09:18:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:45.861 [2024-07-14 09:18:30.287966] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:45.861 09:18:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.861 09:18:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:45.861 09:18:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.861 09:18:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:46.121 09:18:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.121 09:18:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:46.121 09:18:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:46.121 09:18:30 
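nvmfappstart launches the SPDK target inside the target namespace and waits for its RPC socket before configuring anything. A minimal sketch, assuming the build path from this workspace; the framework's waitforlisten helper is approximated here by polling a real RPC (rpc_get_methods):

# Start nvmf_tgt on cores 0-3 (-m 0xF) with all tracepoint groups enabled (-e 0xFFFF).
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Poll the default RPC socket until the app answers (stand-in for waitforlisten).
until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done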
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.121 09:18:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:46.121 09:18:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.121 09:18:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:46.121 09:18:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.121 09:18:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:46.121 09:18:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.121 09:18:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:46.121 09:18:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.121 09:18:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:46.121 [2024-07-14 09:18:30.345991] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:46.121 09:18:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.121 09:18:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:46.121 09:18:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:46.121 09:18:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:46.121 09:18:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:48.652 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:50.554 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:53.092 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:55.624 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:57.530 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:00.070 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:02.019 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:04.554 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:07.090 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.995 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.531 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:13.438 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.975 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.508 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.412 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.017 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.553 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:27.460 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.994 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.526 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:34.430 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:36.962 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.500 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.407 
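With the target running, connect_disconnect.sh provisions it over RPC: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, a subsystem with one namespace, and a TCP listener on the target-side address. The sequence traced above, collected (rpc_py here stands in for the test's rpc_cmd wrapper around scripts/rpc.py):

rpc_py="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
$rpc_py nvmf_create_transport -t tcp -o -u 8192 -c 0
$rpc_py bdev_malloc_create 64 512                      # returns the bdev name, Malloc0
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420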
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.016 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:45.925 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.462 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.999 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.903 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.436 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.343 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.865 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.393 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.319 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.841 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:09.368 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.891 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.788 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:16.316 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.841 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.733 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.290 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.815 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.709 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.231 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:32.756 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.654 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.200 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.725 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.621 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.168 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.692 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.217 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.113 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.638 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.166 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.064 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.587 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.109 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.033 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.566 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.462 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.993 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.519 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.418 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.955 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.856 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.380 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.936 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.829 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.348 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.872 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
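The per-iteration output above and below comes from the test's main loop: on the phy configuration it runs 100 iterations, each connecting the kernel initiator to the subsystem with 8 I/O queues and then disconnecting it, printing one "disconnected 1 controller(s)" line per pass. The loop body itself runs with tracing disabled (set +x), so only its output appears; a hedged reconstruction of one iteration (the real script also waits for the block device to show up before disconnecting, which is omitted here):

for ((i = 0; i < 100; i++)); do
    nvme connect -i 8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints "... disconnected 1 controller(s)"
done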
controller(s) 00:11:34.771 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.301 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.825 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.724 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.252 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.208 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.732 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.255 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.150 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.676 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.570 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.112 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.639 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.538 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.097 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.621 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.516 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.039 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.562 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.455 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.975 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.872 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.425 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.321 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.846 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:32.369 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.266 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.791 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.791 09:22:20 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:36.791 09:22:20 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:36.791 09:22:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:36.791 09:22:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:12:36.791 09:22:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:36.791 09:22:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:12:36.791 09:22:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:36.791 09:22:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:36.791 rmmod nvme_tcp 00:12:36.791 rmmod nvme_fabrics 00:12:36.791 rmmod nvme_keyring 00:12:36.791 09:22:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:36.791 09:22:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:12:36.791 09:22:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:12:36.791 09:22:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 637588 ']' 00:12:36.791 09:22:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 637588 00:12:36.791 09:22:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 637588 
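After the loop, nvmftestfini tears the setup back down: the kernel initiator modules are unloaded (the rmmod lines above), the nvmf_tgt process is killed, and the namespace and initiator address are removed so the next test starts clean. A condensed sketch, assuming _remove_spdk_ns amounts to deleting the namespace created earlier:

modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
kill "$nvmfpid"                      # pid 637588 in this run
ip netns delete cvl_0_0_ns_spdk      # assumption: the effect of _remove_spdk_ns here
ip -4 addr flush cvl_0_1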
']' 00:12:36.791 09:22:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 637588 00:12:36.791 09:22:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:12:36.791 09:22:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:36.791 09:22:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 637588 00:12:36.791 09:22:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:36.791 09:22:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:36.791 09:22:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 637588' 00:12:36.791 killing process with pid 637588 00:12:36.791 09:22:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 637588 00:12:36.791 09:22:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 637588 00:12:37.050 09:22:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:37.050 09:22:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:37.050 09:22:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:37.050 09:22:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:37.050 09:22:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:37.050 09:22:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.050 09:22:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:37.050 09:22:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:38.976 09:22:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:38.976 00:12:38.976 real 3m55.662s 00:12:38.976 user 14m57.377s 00:12:38.976 sys 0m34.653s 00:12:38.976 09:22:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:38.976 09:22:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:38.976 ************************************ 00:12:38.976 END TEST nvmf_connect_disconnect 00:12:38.976 ************************************ 00:12:38.976 09:22:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:38.976 09:22:23 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:38.976 09:22:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:38.976 09:22:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:38.976 09:22:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:38.976 ************************************ 00:12:38.976 START TEST nvmf_multitarget 00:12:38.976 ************************************ 00:12:38.976 09:22:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:39.233 * Looking for test storage... 
00:12:39.233 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:39.233 09:22:23 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:39.233 09:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:39.233 09:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:39.233 09:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:39.233 09:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:39.233 09:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:39.233 09:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:39.233 09:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:39.233 09:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:39.233 09:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:39.233 09:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:39.234 09:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:39.234 09:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:39.234 09:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:39.234 09:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:39.234 09:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:39.234 09:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:39.234 09:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:39.234 09:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:39.234 09:22:23 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:39.234 09:22:23 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:39.234 09:22:23 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:39.234 09:22:23 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.234 09:22:23 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
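The common test environment sourced above also derives a host identity with nvme gen-hostnqn; the NQN and its UUID suffix are what tests pass as --hostnqn/--hostid when connecting. A small sketch of that derivation (the exact parsing inside common.sh may differ):

NVME_HOSTNQN=$(nvme gen-hostnqn)      # nqn.2014-08.org.nvmexpress:uuid:5b23e107-... in this run
NVME_HOSTID=${NVME_HOSTNQN##*:}       # keep just the UUID portion
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")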
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.234 09:22:23 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.234 09:22:23 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:39.234 09:22:23 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.234 09:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:12:39.234 09:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:39.234 09:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:39.234 09:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:39.234 09:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:39.234 09:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:39.234 09:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:39.234 09:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:39.234 09:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:39.234 09:22:23 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:39.234 09:22:23 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:39.234 09:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:39.234 09:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:39.234 09:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:39.234 09:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:39.234 09:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:39.234 09:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:12:39.234 09:22:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:39.234 09:22:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.234 09:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:39.234 09:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:39.234 09:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:12:39.234 09:22:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:41.180 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:41.180 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:12:41.180 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:41.180 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:41.180 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:41.180 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:41.180 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:41.180 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:12:41.180 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:41.180 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:12:41.180 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:12:41.180 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:12:41.180 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:12:41.180 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:12:41.180 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:12:41.180 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:41.180 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:41.180 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:41.180 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:41.180 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:41.180 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:41.180 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:41.180 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:41.180 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:41.180 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:41.180 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:41.180 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:41.180 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:41.180 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:12:41.180 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:41.180 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:41.180 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:41.180 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:41.180 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:41.180 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:41.180 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:41.180 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:41.180 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:41.180 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:41.180 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:41.180 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:41.180 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:41.180 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:41.180 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:41.180 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:41.180 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:41.180 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:41.180 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:41.180 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:41.180 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:41.180 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:41.180 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:41.181 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:41.181 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:41.181 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:41.181 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:12:41.181 00:12:41.181 --- 10.0.0.2 ping statistics --- 00:12:41.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:41.181 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:41.181 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:41.181 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:12:41.181 00:12:41.181 --- 10.0.0.1 ping statistics --- 00:12:41.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:41.181 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=668566 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 668566 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 668566 ']' 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:41.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:41.181 09:22:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:41.181 [2024-07-14 09:22:25.593789] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:12:41.181 [2024-07-14 09:22:25.593888] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:41.181 EAL: No free 2048 kB hugepages reported on node 1 00:12:41.439 [2024-07-14 09:22:25.667830] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:41.439 [2024-07-14 09:22:25.767607] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:41.439 [2024-07-14 09:22:25.767664] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:41.439 [2024-07-14 09:22:25.767681] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:41.439 [2024-07-14 09:22:25.767694] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:41.439 [2024-07-14 09:22:25.767706] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:41.439 [2024-07-14 09:22:25.768890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:41.439 [2024-07-14 09:22:25.768930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:41.439 [2024-07-14 09:22:25.768985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:41.439 [2024-07-14 09:22:25.768989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.696 09:22:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:41.696 09:22:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:12:41.696 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:41.696 09:22:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:41.696 09:22:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:41.696 09:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:41.696 09:22:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:41.697 09:22:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:41.697 09:22:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:41.697 09:22:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:41.697 09:22:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:41.954 "nvmf_tgt_1" 00:12:41.954 09:22:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:41.954 "nvmf_tgt_2" 00:12:41.954 09:22:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:41.954 09:22:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:42.212 09:22:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:12:42.212 09:22:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:42.212 true 00:12:42.212 09:22:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:42.212 true 00:12:42.212 09:22:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:42.212 09:22:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:42.471 09:22:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:42.471 09:22:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:42.471 09:22:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:42.471 09:22:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:42.471 09:22:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:12:42.471 09:22:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:42.471 09:22:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:12:42.471 09:22:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:42.471 09:22:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:42.471 rmmod nvme_tcp 00:12:42.471 rmmod nvme_fabrics 00:12:42.471 rmmod nvme_keyring 00:12:42.471 09:22:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:42.471 09:22:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:12:42.471 09:22:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:12:42.471 09:22:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 668566 ']' 00:12:42.471 09:22:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 668566 00:12:42.471 09:22:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 668566 ']' 00:12:42.471 09:22:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 668566 00:12:42.471 09:22:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:12:42.471 09:22:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:42.471 09:22:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 668566 00:12:42.471 09:22:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:42.471 09:22:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:42.471 09:22:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 668566' 00:12:42.471 killing process with pid 668566 00:12:42.471 09:22:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 668566 00:12:42.471 09:22:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 668566 00:12:42.739 09:22:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:42.739 09:22:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:42.739 09:22:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:42.739 09:22:27 nvmf_tcp.nvmf_multitarget -- 
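multitarget.sh exercises hosting several independent NVMe-oF targets inside one nvmf_tgt process: it creates two extra targets of size 32, checks that the target list grows from 1 to 3, deletes them again, and verifies the count drops back to 1. The RPC flow traced above, collected (multitarget_rpc abbreviates the test's multitarget_rpc.py path):

multitarget_rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py"
[ "$($multitarget_rpc nvmf_get_targets | jq length)" -eq 1 ]   # only the default target exists
$multitarget_rpc nvmf_create_target -n nvmf_tgt_1 -s 32
$multitarget_rpc nvmf_create_target -n nvmf_tgt_2 -s 32
[ "$($multitarget_rpc nvmf_get_targets | jq length)" -eq 3 ]   # default + the two new targets
$multitarget_rpc nvmf_delete_target -n nvmf_tgt_1
$multitarget_rpc nvmf_delete_target -n nvmf_tgt_2
[ "$($multitarget_rpc nvmf_get_targets | jq length)" -eq 1 ]   # back to just the default target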
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:42.739 09:22:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:42.739 09:22:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:42.739 09:22:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:42.739 09:22:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:44.691 09:22:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:44.691 00:12:44.691 real 0m5.705s 00:12:44.691 user 0m6.554s 00:12:44.691 sys 0m1.936s 00:12:44.691 09:22:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:44.691 09:22:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:44.691 ************************************ 00:12:44.691 END TEST nvmf_multitarget 00:12:44.691 ************************************ 00:12:44.691 09:22:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:44.691 09:22:29 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:44.691 09:22:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:44.691 09:22:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:44.691 09:22:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:44.949 ************************************ 00:12:44.949 START TEST nvmf_rpc 00:12:44.949 ************************************ 00:12:44.949 09:22:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:44.949 * Looking for test storage... 
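The nvmf_multitarget run that finishes just above boils down to a handful of RPCs issued through test/nvmf/target/multitarget_rpc.py: create two named targets, confirm the count with jq, delete them again, and confirm the count drops back to one. A condensed, hedged reproduction of that flow follows; it assumes a running nvmf_tgt reachable over the default RPC socket and uses $SPDK_DIR as a placeholder for the checkout path seen in the trace.

RPC="$SPDK_DIR/test/nvmf/target/multitarget_rpc.py"   # $SPDK_DIR is a placeholder

# Only the default target should exist before the test body runs.
[ "$("$RPC" nvmf_get_targets | jq length)" -eq 1 ] || exit 1

# Create two extra targets with the same options the trace shows (-n name, -s 32).
"$RPC" nvmf_create_target -n nvmf_tgt_1 -s 32
"$RPC" nvmf_create_target -n nvmf_tgt_2 -s 32
[ "$("$RPC" nvmf_get_targets | jq length)" -eq 3 ] || exit 1

# Delete them again and verify only the default target remains.
"$RPC" nvmf_delete_target -n nvmf_tgt_1
"$RPC" nvmf_delete_target -n nvmf_tgt_2
[ "$("$RPC" nvmf_get_targets | jq length)" -eq 1 ] || exit 1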
00:12:44.949 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:44.949 09:22:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:44.949 09:22:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:44.949 09:22:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:44.949 09:22:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:44.949 09:22:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:44.949 09:22:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:44.949 09:22:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:44.949 09:22:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:44.949 09:22:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:44.949 09:22:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:44.949 09:22:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:44.949 09:22:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:44.949 09:22:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:44.949 09:22:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:44.949 09:22:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:44.949 09:22:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:44.949 09:22:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:44.949 09:22:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:44.949 09:22:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:44.949 09:22:29 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:44.949 09:22:29 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:44.949 09:22:29 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:44.949 09:22:29 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.949 09:22:29 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.949 09:22:29 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.949 09:22:29 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:44.950 09:22:29 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.950 09:22:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:12:44.950 09:22:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:44.950 09:22:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:44.950 09:22:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:44.950 09:22:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:44.950 09:22:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:44.950 09:22:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:44.950 09:22:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:44.950 09:22:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:44.950 09:22:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:44.950 09:22:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:44.950 09:22:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:44.950 09:22:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:44.950 09:22:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:44.950 09:22:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:44.950 09:22:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:44.950 09:22:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:44.950 09:22:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:44.950 09:22:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:44.950 09:22:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:44.950 09:22:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:44.950 09:22:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:12:44.950 09:22:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
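The array declarations above kick off gather_supported_nvmf_pci_devs; just before that, nvmf/common.sh fixed the host identity that every later nvme connect in this log reuses as --hostnqn/--hostid. A minimal sketch of that setup; the UUID-suffix extraction is an assumption, since the trace only shows the resulting values.

NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:5b23e107-...
NVME_HOSTID=${NVME_HOSTNQN##*:}      # assumption: host ID is the UUID suffix of the NQN
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
echo "initiator will connect as: ${NVME_HOST[*]}"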
00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:46.854 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:46.854 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:46.854 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:46.854 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:46.854 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:46.854 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:12:46.854 00:12:46.854 --- 10.0.0.2 ping statistics --- 00:12:46.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.854 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:12:46.854 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:46.854 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:46.854 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:12:46.854 00:12:46.854 --- 10.0.0.1 ping statistics --- 00:12:46.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.855 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:12:46.855 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:46.855 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:12:46.855 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:46.855 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:46.855 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:46.855 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:46.855 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:46.855 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:46.855 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:46.855 09:22:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:46.855 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:46.855 09:22:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:46.855 09:22:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.855 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=670665 00:12:46.855 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:46.855 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 670665 00:12:46.855 09:22:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 670665 ']' 00:12:46.855 09:22:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:46.855 09:22:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:46.855 09:22:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:46.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:46.855 09:22:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:46.855 09:22:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.113 [2024-07-14 09:22:31.349756] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:12:47.113 [2024-07-14 09:22:31.349840] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:47.113 EAL: No free 2048 kB hugepages reported on node 1 00:12:47.113 [2024-07-14 09:22:31.419891] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:47.113 [2024-07-14 09:22:31.513732] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:47.113 [2024-07-14 09:22:31.513804] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:47.113 [2024-07-14 09:22:31.513820] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:47.113 [2024-07-14 09:22:31.513833] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:47.113 [2024-07-14 09:22:31.513845] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:47.113 [2024-07-14 09:22:31.513923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:47.113 [2024-07-14 09:22:31.513979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:47.113 [2024-07-14 09:22:31.514034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:47.113 [2024-07-14 09:22:31.514037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.372 09:22:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:47.372 09:22:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:12:47.372 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:47.372 09:22:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:47.372 09:22:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.372 09:22:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:47.372 09:22:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:47.372 09:22:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.372 09:22:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.372 09:22:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.372 09:22:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:47.372 "tick_rate": 2700000000, 00:12:47.372 "poll_groups": [ 00:12:47.372 { 00:12:47.372 "name": "nvmf_tgt_poll_group_000", 00:12:47.372 "admin_qpairs": 0, 00:12:47.372 "io_qpairs": 0, 00:12:47.372 "current_admin_qpairs": 0, 00:12:47.372 "current_io_qpairs": 0, 00:12:47.372 "pending_bdev_io": 0, 00:12:47.372 "completed_nvme_io": 0, 00:12:47.372 "transports": [] 00:12:47.372 }, 00:12:47.372 { 00:12:47.372 "name": "nvmf_tgt_poll_group_001", 00:12:47.372 "admin_qpairs": 0, 00:12:47.372 "io_qpairs": 0, 00:12:47.372 "current_admin_qpairs": 0, 00:12:47.372 "current_io_qpairs": 0, 00:12:47.372 "pending_bdev_io": 0, 00:12:47.372 "completed_nvme_io": 0, 00:12:47.372 "transports": [] 00:12:47.372 }, 00:12:47.372 { 00:12:47.372 "name": "nvmf_tgt_poll_group_002", 00:12:47.372 "admin_qpairs": 0, 00:12:47.372 "io_qpairs": 0, 00:12:47.372 "current_admin_qpairs": 0, 00:12:47.372 "current_io_qpairs": 0, 00:12:47.372 "pending_bdev_io": 0, 00:12:47.372 "completed_nvme_io": 0, 00:12:47.372 "transports": [] 00:12:47.372 }, 00:12:47.372 { 00:12:47.372 "name": "nvmf_tgt_poll_group_003", 00:12:47.372 "admin_qpairs": 0, 00:12:47.372 "io_qpairs": 0, 00:12:47.372 "current_admin_qpairs": 0, 00:12:47.372 "current_io_qpairs": 0, 00:12:47.372 "pending_bdev_io": 0, 00:12:47.372 "completed_nvme_io": 0, 00:12:47.372 "transports": [] 00:12:47.372 } 00:12:47.372 ] 00:12:47.372 }' 00:12:47.372 09:22:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:47.372 09:22:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:47.372 09:22:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:47.372 09:22:31 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:12:47.372 09:22:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:47.372 09:22:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:47.372 09:22:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:47.372 09:22:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:47.372 09:22:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.372 09:22:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.372 [2024-07-14 09:22:31.775277] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:47.372 09:22:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.372 09:22:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:47.372 09:22:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.372 09:22:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.372 09:22:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.372 09:22:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:47.372 "tick_rate": 2700000000, 00:12:47.372 "poll_groups": [ 00:12:47.372 { 00:12:47.372 "name": "nvmf_tgt_poll_group_000", 00:12:47.372 "admin_qpairs": 0, 00:12:47.372 "io_qpairs": 0, 00:12:47.372 "current_admin_qpairs": 0, 00:12:47.372 "current_io_qpairs": 0, 00:12:47.372 "pending_bdev_io": 0, 00:12:47.372 "completed_nvme_io": 0, 00:12:47.372 "transports": [ 00:12:47.372 { 00:12:47.372 "trtype": "TCP" 00:12:47.372 } 00:12:47.372 ] 00:12:47.372 }, 00:12:47.372 { 00:12:47.372 "name": "nvmf_tgt_poll_group_001", 00:12:47.372 "admin_qpairs": 0, 00:12:47.372 "io_qpairs": 0, 00:12:47.372 "current_admin_qpairs": 0, 00:12:47.372 "current_io_qpairs": 0, 00:12:47.372 "pending_bdev_io": 0, 00:12:47.372 "completed_nvme_io": 0, 00:12:47.372 "transports": [ 00:12:47.372 { 00:12:47.372 "trtype": "TCP" 00:12:47.372 } 00:12:47.372 ] 00:12:47.372 }, 00:12:47.372 { 00:12:47.372 "name": "nvmf_tgt_poll_group_002", 00:12:47.372 "admin_qpairs": 0, 00:12:47.372 "io_qpairs": 0, 00:12:47.372 "current_admin_qpairs": 0, 00:12:47.372 "current_io_qpairs": 0, 00:12:47.372 "pending_bdev_io": 0, 00:12:47.372 "completed_nvme_io": 0, 00:12:47.372 "transports": [ 00:12:47.372 { 00:12:47.372 "trtype": "TCP" 00:12:47.372 } 00:12:47.372 ] 00:12:47.372 }, 00:12:47.372 { 00:12:47.372 "name": "nvmf_tgt_poll_group_003", 00:12:47.372 "admin_qpairs": 0, 00:12:47.372 "io_qpairs": 0, 00:12:47.372 "current_admin_qpairs": 0, 00:12:47.372 "current_io_qpairs": 0, 00:12:47.372 "pending_bdev_io": 0, 00:12:47.372 "completed_nvme_io": 0, 00:12:47.372 "transports": [ 00:12:47.372 { 00:12:47.372 "trtype": "TCP" 00:12:47.372 } 00:12:47.372 ] 00:12:47.372 } 00:12:47.372 ] 00:12:47.372 }' 00:12:47.372 09:22:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:47.372 09:22:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:47.372 09:22:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:47.372 09:22:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:47.631 09:22:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:47.631 09:22:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:47.631 09:22:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
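The stats/transport sequence above — nvmf_get_stats showing empty transports lists, nvmf_create_transport -t tcp -o -u 8192, then nvmf_get_stats again with a TCP entry and zeroed qpair counters in every poll group — can be reproduced by hand against the same target. A hedged sketch, assuming the target started by nvmftestinit and the default /var/tmp/spdk.sock RPC socket (the test itself routes these calls through rpc_cmd inside the cvl_0_0_ns_spdk namespace):

RPC_PY="$SPDK_DIR/scripts/rpc.py"     # $SPDK_DIR is a placeholder for the checkout

# No transport yet: the first poll group reports an empty transports list.
"$RPC_PY" nvmf_get_stats | jq '.poll_groups[0].transports[0]'        # -> null

# Create the TCP transport with the options the trace shows (-t tcp -o -u 8192).
"$RPC_PY" nvmf_create_transport -t tcp -o -u 8192

# Every poll group now carries a TCP transport and all qpair counters start at 0.
"$RPC_PY" nvmf_get_stats | jq '.poll_groups[].transports[].trtype'   # -> "TCP" for each group
"$RPC_PY" nvmf_get_stats | jq '[.poll_groups[].io_qpairs] | add'     # -> 0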
00:12:47.631 09:22:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:47.631 09:22:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:47.631 09:22:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:47.631 09:22:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:47.631 09:22:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:47.631 09:22:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:47.631 09:22:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:47.631 09:22:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.631 09:22:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.631 Malloc1 00:12:47.631 09:22:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.631 09:22:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:47.631 09:22:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.631 09:22:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.631 09:22:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.631 09:22:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:47.631 09:22:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.631 09:22:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.631 09:22:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.631 09:22:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:47.631 09:22:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.631 09:22:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.631 09:22:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.631 09:22:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:47.631 09:22:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.631 09:22:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.631 [2024-07-14 09:22:31.933016] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:47.631 09:22:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.631 09:22:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:47.631 09:22:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:47.631 09:22:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:47.631 09:22:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:12:47.631 09:22:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:47.631 09:22:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:47.631 09:22:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:47.631 09:22:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:47.631 09:22:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:47.631 09:22:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:47.631 09:22:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:12:47.631 09:22:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:47.631 [2024-07-14 09:22:31.955531] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:12:47.631 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:47.631 could not add new controller: failed to write to nvme-fabrics device 00:12:47.631 09:22:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:47.631 09:22:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:47.631 09:22:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:47.631 09:22:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:47.631 09:22:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:47.631 09:22:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.631 09:22:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.631 09:22:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.631 09:22:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:48.565 09:22:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:48.565 09:22:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:48.565 09:22:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:48.565 09:22:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:48.565 09:22:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:50.460 09:22:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:50.460 09:22:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:50.460 09:22:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:50.460 09:22:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:50.460 09:22:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:50.460 09:22:34 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:50.460 09:22:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:50.460 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.460 09:22:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:50.460 09:22:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:50.460 09:22:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:50.460 09:22:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:50.460 09:22:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:50.460 09:22:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:50.460 09:22:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:50.460 09:22:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:50.460 09:22:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.460 09:22:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.460 09:22:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.460 09:22:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:50.460 09:22:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:50.460 09:22:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:50.460 09:22:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:12:50.460 09:22:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:50.460 09:22:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:50.460 09:22:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:50.460 09:22:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:50.460 09:22:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:50.460 09:22:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:50.460 09:22:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:12:50.460 09:22:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:50.460 [2024-07-14 09:22:34.780994] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:12:50.460 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:50.460 could not add new controller: failed to write to nvme-fabrics device 00:12:50.460 09:22:34 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:12:50.460 09:22:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:50.460 09:22:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:50.460 09:22:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:50.460 09:22:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:50.460 09:22:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.460 09:22:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.460 09:22:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.460 09:22:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:51.035 09:22:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:51.035 09:22:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:51.035 09:22:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:51.035 09:22:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:51.035 09:22:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:53.563 09:22:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:53.563 09:22:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:53.563 09:22:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:53.563 09:22:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:53.563 09:22:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:53.563 09:22:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:53.563 09:22:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:53.563 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.563 09:22:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:53.563 09:22:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:53.563 09:22:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:53.563 09:22:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:53.563 09:22:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:53.563 09:22:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:53.563 09:22:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:53.563 09:22:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:53.563 09:22:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.563 09:22:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.563 09:22:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.563 09:22:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:53.563 09:22:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:53.563 09:22:37 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:53.563 09:22:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.563 09:22:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.563 09:22:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.563 09:22:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:53.563 09:22:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.563 09:22:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.563 [2024-07-14 09:22:37.596747] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:53.563 09:22:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.563 09:22:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:53.563 09:22:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.563 09:22:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.563 09:22:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.563 09:22:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:53.563 09:22:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.563 09:22:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.563 09:22:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.563 09:22:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:54.127 09:22:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:54.127 09:22:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:54.127 09:22:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:54.127 09:22:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:54.127 09:22:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:56.023 09:22:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:56.023 09:22:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:56.023 09:22:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:56.023 09:22:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:56.023 09:22:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:56.023 09:22:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:56.023 09:22:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:56.023 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.023 09:22:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:56.023 09:22:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:56.023 09:22:40 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:56.023 09:22:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:56.023 09:22:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:56.023 09:22:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:56.023 09:22:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:56.023 09:22:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:56.023 09:22:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.024 09:22:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.024 09:22:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.024 09:22:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:56.024 09:22:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.024 09:22:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.024 09:22:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.024 09:22:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:56.024 09:22:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:56.024 09:22:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.024 09:22:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.024 09:22:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.024 09:22:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:56.024 09:22:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.024 09:22:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.024 [2024-07-14 09:22:40.413927] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:56.024 09:22:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.024 09:22:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:56.024 09:22:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.024 09:22:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.024 09:22:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.024 09:22:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:56.024 09:22:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.024 09:22:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.024 09:22:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.024 09:22:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:56.955 09:22:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:56.955 09:22:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:12:56.955 09:22:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:56.955 09:22:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:56.955 09:22:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:58.853 09:22:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:58.853 09:22:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:58.853 09:22:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:58.853 09:22:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:58.853 09:22:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:58.853 09:22:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:58.853 09:22:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:58.853 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.853 09:22:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:58.853 09:22:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:58.853 09:22:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:58.853 09:22:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:58.853 09:22:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:58.853 09:22:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:58.853 09:22:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:58.853 09:22:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:58.853 09:22:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.853 09:22:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.110 09:22:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.110 09:22:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:59.110 09:22:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.110 09:22:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.110 09:22:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.110 09:22:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:59.110 09:22:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:59.110 09:22:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.110 09:22:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.110 09:22:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.110 09:22:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:59.110 09:22:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.110 09:22:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.110 [2024-07-14 09:22:43.326694] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:12:59.110 09:22:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.110 09:22:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:59.110 09:22:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.110 09:22:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.110 09:22:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.110 09:22:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:59.111 09:22:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.111 09:22:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.111 09:22:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.111 09:22:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:59.675 09:22:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:59.675 09:22:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:59.675 09:22:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:59.675 09:22:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:59.675 09:22:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:01.606 09:22:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:01.606 09:22:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:01.606 09:22:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:01.606 09:22:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:01.606 09:22:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:01.606 09:22:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:01.606 09:22:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:01.864 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.864 09:22:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:01.864 09:22:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:01.864 09:22:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:01.864 09:22:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:01.864 09:22:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:01.864 09:22:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:01.864 09:22:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:01.864 09:22:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:01.864 09:22:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.864 09:22:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.864 09:22:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:13:01.864 09:22:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:01.864 09:22:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.864 09:22:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.864 09:22:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.864 09:22:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:01.864 09:22:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:01.864 09:22:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.864 09:22:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.864 09:22:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.864 09:22:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:01.864 09:22:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.864 09:22:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.864 [2024-07-14 09:22:46.109770] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:01.864 09:22:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.864 09:22:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:01.864 09:22:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.864 09:22:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.864 09:22:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.864 09:22:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:01.864 09:22:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.864 09:22:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.864 09:22:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.864 09:22:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:02.430 09:22:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:02.430 09:22:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:02.430 09:22:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:02.430 09:22:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:02.430 09:22:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:04.328 09:22:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:04.328 09:22:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:04.328 09:22:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:04.328 09:22:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:04.328 09:22:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:04.328 
09:22:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:04.328 09:22:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:04.586 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.586 09:22:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:04.586 09:22:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:04.586 09:22:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:04.586 09:22:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:04.586 09:22:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:04.586 09:22:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:04.586 09:22:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:04.586 09:22:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:04.586 09:22:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.586 09:22:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.586 09:22:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.586 09:22:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:04.586 09:22:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.586 09:22:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.586 09:22:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.586 09:22:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:04.586 09:22:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:04.586 09:22:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.586 09:22:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.586 09:22:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.586 09:22:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:04.586 09:22:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.586 09:22:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.586 [2024-07-14 09:22:48.887951] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.587 09:22:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.587 09:22:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:04.587 09:22:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.587 09:22:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.587 09:22:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.587 09:22:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:04.587 09:22:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.587 09:22:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.587 09:22:48 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.587 09:22:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:05.153 09:22:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:05.153 09:22:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:05.153 09:22:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:05.153 09:22:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:05.153 09:22:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:07.679 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.679 [2024-07-14 09:22:51.670815] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.679 [2024-07-14 09:22:51.718906] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.679 [2024-07-14 09:22:51.767058] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
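The rpc loop traced above exercises the full subsystem lifecycle: create a subsystem, attach a TCP listener and a Malloc1 namespace, open it to any host, connect from the kernel initiator, then disconnect and tear everything down before the next iteration. A minimal standalone sketch of the same sequence, using the addresses, NQN and serial from this run (the test itself wraps each call in its rpc_cmd helper and xtrace toggling, omitted here):

    # target side: build and expose the subsystem
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
    scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1

    # initiator side: attach, then detach and clean up
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1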
00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:07.679 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.680 [2024-07-14 09:22:51.815252] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
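Between connect and disconnect the test never queries the target; waitforserial and waitforserial_disconnect simply poll the initiator's block devices until a device carrying the subsystem serial appears (or goes away). A simplified sketch of that polling loop, using the same lsblk/grep check and 2-second interval seen above:

    serial=SPDKISFASTANDAWESOME
    for i in $(seq 1 15); do
        # a block device reporting the subsystem serial means the namespace is attached
        (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && break
        sleep 2
    done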
00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.680 [2024-07-14 09:22:51.863410] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:07.680 "tick_rate": 2700000000, 00:13:07.680 "poll_groups": [ 00:13:07.680 { 00:13:07.680 "name": "nvmf_tgt_poll_group_000", 00:13:07.680 "admin_qpairs": 2, 00:13:07.680 "io_qpairs": 84, 00:13:07.680 "current_admin_qpairs": 0, 00:13:07.680 "current_io_qpairs": 0, 00:13:07.680 "pending_bdev_io": 0, 00:13:07.680 "completed_nvme_io": 146, 00:13:07.680 "transports": [ 00:13:07.680 { 00:13:07.680 "trtype": "TCP" 00:13:07.680 } 00:13:07.680 ] 00:13:07.680 }, 00:13:07.680 { 00:13:07.680 "name": "nvmf_tgt_poll_group_001", 00:13:07.680 "admin_qpairs": 2, 00:13:07.680 "io_qpairs": 84, 00:13:07.680 "current_admin_qpairs": 0, 00:13:07.680 "current_io_qpairs": 0, 00:13:07.680 "pending_bdev_io": 0, 00:13:07.680 "completed_nvme_io": 231, 00:13:07.680 "transports": [ 00:13:07.680 { 00:13:07.680 "trtype": "TCP" 00:13:07.680 } 00:13:07.680 ] 00:13:07.680 }, 00:13:07.680 { 00:13:07.680 
"name": "nvmf_tgt_poll_group_002", 00:13:07.680 "admin_qpairs": 1, 00:13:07.680 "io_qpairs": 84, 00:13:07.680 "current_admin_qpairs": 0, 00:13:07.680 "current_io_qpairs": 0, 00:13:07.680 "pending_bdev_io": 0, 00:13:07.680 "completed_nvme_io": 135, 00:13:07.680 "transports": [ 00:13:07.680 { 00:13:07.680 "trtype": "TCP" 00:13:07.680 } 00:13:07.680 ] 00:13:07.680 }, 00:13:07.680 { 00:13:07.680 "name": "nvmf_tgt_poll_group_003", 00:13:07.680 "admin_qpairs": 2, 00:13:07.680 "io_qpairs": 84, 00:13:07.680 "current_admin_qpairs": 0, 00:13:07.680 "current_io_qpairs": 0, 00:13:07.680 "pending_bdev_io": 0, 00:13:07.680 "completed_nvme_io": 174, 00:13:07.680 "transports": [ 00:13:07.680 { 00:13:07.680 "trtype": "TCP" 00:13:07.680 } 00:13:07.680 ] 00:13:07.680 } 00:13:07.680 ] 00:13:07.680 }' 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:07.680 09:22:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:07.680 rmmod nvme_tcp 00:13:07.680 rmmod nvme_fabrics 00:13:07.680 rmmod nvme_keyring 00:13:07.680 09:22:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:07.680 09:22:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:13:07.680 09:22:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:13:07.680 09:22:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 670665 ']' 00:13:07.680 09:22:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 670665 00:13:07.680 09:22:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 670665 ']' 00:13:07.680 09:22:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 670665 00:13:07.680 09:22:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:13:07.680 09:22:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:07.680 09:22:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 670665 00:13:07.680 09:22:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:13:07.680 09:22:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:07.680 09:22:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 670665' 00:13:07.680 killing process with pid 670665 00:13:07.680 09:22:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 670665 00:13:07.680 09:22:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 670665 00:13:07.939 09:22:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:07.939 09:22:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:07.939 09:22:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:07.939 09:22:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:07.939 09:22:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:07.940 09:22:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:07.940 09:22:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:07.940 09:22:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:10.493 09:22:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:10.493 00:13:10.493 real 0m25.234s 00:13:10.493 user 1m22.444s 00:13:10.493 sys 0m3.981s 00:13:10.493 09:22:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:10.493 09:22:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.493 ************************************ 00:13:10.493 END TEST nvmf_rpc 00:13:10.493 ************************************ 00:13:10.493 09:22:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:10.493 09:22:54 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:10.493 09:22:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:10.493 09:22:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:10.493 09:22:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:10.493 ************************************ 00:13:10.493 START TEST nvmf_invalid 00:13:10.493 ************************************ 00:13:10.493 09:22:54 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:10.493 * Looking for test storage... 
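Before the target was shut down above, the rpc test checked nvmf_get_stats by feeding the per-poll-group counters through its jsum helper, which is nothing more than jq piped into awk and a test that the totals are non-zero. The same aggregation can be reproduced against a running target (counter values naturally differ from run to run):

    # total admin and I/O queue pairs across all poll groups
    scripts/rpc.py nvmf_get_stats | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1} END {print s}'
    scripts/rpc.py nvmf_get_stats | jq '.poll_groups[].io_qpairs'    | awk '{s+=$1} END {print s}'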
00:13:10.493 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:10.493 09:22:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:10.493 09:22:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:10.493 09:22:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:10.493 09:22:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:10.493 09:22:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:10.494 09:22:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:10.494 09:22:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:10.494 09:22:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:10.494 09:22:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:10.494 09:22:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:10.494 09:22:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:10.494 09:22:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:10.494 09:22:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:10.494 09:22:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:10.494 09:22:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:10.494 09:22:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:10.494 09:22:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:10.494 09:22:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:10.494 09:22:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:10.494 09:22:54 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:10.494 09:22:54 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:10.494 09:22:54 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:10.494 09:22:54 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.494 09:22:54 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.494 09:22:54 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.494 09:22:54 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:10.494 09:22:54 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.494 09:22:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:13:10.494 09:22:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:10.494 09:22:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:10.494 09:22:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:10.494 09:22:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:10.494 09:22:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:10.494 09:22:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:10.494 09:22:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:10.494 09:22:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:10.494 09:22:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:10.494 09:22:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:10.494 09:22:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:10.494 09:22:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:10.494 09:22:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:10.494 09:22:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:10.494 09:22:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:10.494 09:22:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:10.494 09:22:54 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:13:10.494 09:22:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:10.494 09:22:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:10.494 09:22:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:10.494 09:22:54 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:10.494 09:22:54 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:10.494 09:22:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:10.494 09:22:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:10.494 09:22:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:13:10.494 09:22:54 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:12.396 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:12.396 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:12.396 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:12.396 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:12.396 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:12.397 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:12.397 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:12.397 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:12.397 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:12.397 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:12.397 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:13:12.397 00:13:12.397 --- 10.0.0.2 ping statistics --- 00:13:12.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.397 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:13:12.397 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:12.397 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:12.397 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:13:12.397 00:13:12.397 --- 10.0.0.1 ping statistics --- 00:13:12.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.397 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:13:12.397 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:12.397 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:13:12.397 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:12.397 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:12.397 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:12.397 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:12.397 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:12.397 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:12.397 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:12.397 09:22:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:12.397 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:12.397 09:22:56 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:12.397 09:22:56 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:12.397 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=675160 00:13:12.397 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:12.397 09:22:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 675160 00:13:12.397 09:22:56 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 675160 ']' 00:13:12.397 09:22:56 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:12.397 09:22:56 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:12.397 09:22:56 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:12.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:12.397 09:22:56 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:12.397 09:22:56 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:12.397 [2024-07-14 09:22:56.787097] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
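The nvmf_tcp_init block above turns the single physical e810 port pair into a two-node test bed: the target port cvl_0_0 is moved into its own network namespace and given 10.0.0.2, the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1, port 4420 is opened in the firewall, and both directions are ping-checked before the nvmf_tgt application is started inside that namespace. Condensed from the commands in this run (the cvl_0_* interface names are specific to this machine):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator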
00:13:12.397 [2024-07-14 09:22:56.787211] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:12.397 EAL: No free 2048 kB hugepages reported on node 1 00:13:12.655 [2024-07-14 09:22:56.858056] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:12.655 [2024-07-14 09:22:56.955821] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:12.655 [2024-07-14 09:22:56.955885] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:12.655 [2024-07-14 09:22:56.955902] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:12.655 [2024-07-14 09:22:56.955915] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:12.655 [2024-07-14 09:22:56.955927] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:12.655 [2024-07-14 09:22:56.955994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:12.656 [2024-07-14 09:22:56.956050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:12.656 [2024-07-14 09:22:56.956114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:12.656 [2024-07-14 09:22:56.956116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.656 09:22:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:12.656 09:22:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:13:12.656 09:22:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:12.656 09:22:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:12.656 09:22:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:12.656 09:22:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:12.656 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:12.656 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode7843 00:13:12.913 [2024-07-14 09:22:57.327311] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:12.913 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:12.913 { 00:13:12.913 "nqn": "nqn.2016-06.io.spdk:cnode7843", 00:13:12.913 "tgt_name": "foobar", 00:13:12.913 "method": "nvmf_create_subsystem", 00:13:12.913 "req_id": 1 00:13:12.913 } 00:13:12.913 Got JSON-RPC error response 00:13:12.913 response: 00:13:12.913 { 00:13:12.913 "code": -32603, 00:13:12.913 "message": "Unable to find target foobar" 00:13:12.913 }' 00:13:12.913 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:12.913 { 00:13:12.913 "nqn": "nqn.2016-06.io.spdk:cnode7843", 00:13:12.913 "tgt_name": "foobar", 00:13:12.913 "method": "nvmf_create_subsystem", 00:13:12.913 "req_id": 1 00:13:12.913 } 00:13:12.913 Got JSON-RPC error response 00:13:12.913 response: 00:13:12.913 { 00:13:12.913 "code": -32603, 00:13:12.913 "message": "Unable to find target foobar" 00:13:12.913 } 
== *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:12.913 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:12.913 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode17993 00:13:13.171 [2024-07-14 09:22:57.572162] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17993: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:13.171 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:13.171 { 00:13:13.171 "nqn": "nqn.2016-06.io.spdk:cnode17993", 00:13:13.171 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:13.171 "method": "nvmf_create_subsystem", 00:13:13.171 "req_id": 1 00:13:13.171 } 00:13:13.171 Got JSON-RPC error response 00:13:13.171 response: 00:13:13.171 { 00:13:13.171 "code": -32602, 00:13:13.171 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:13.171 }' 00:13:13.171 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:13.171 { 00:13:13.171 "nqn": "nqn.2016-06.io.spdk:cnode17993", 00:13:13.171 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:13.171 "method": "nvmf_create_subsystem", 00:13:13.171 "req_id": 1 00:13:13.171 } 00:13:13.171 Got JSON-RPC error response 00:13:13.171 response: 00:13:13.171 { 00:13:13.171 "code": -32602, 00:13:13.171 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:13.171 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:13.171 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:13.171 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode26587 00:13:13.430 [2024-07-14 09:22:57.828999] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26587: invalid model number 'SPDK_Controller' 00:13:13.430 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:13.430 { 00:13:13.430 "nqn": "nqn.2016-06.io.spdk:cnode26587", 00:13:13.430 "model_number": "SPDK_Controller\u001f", 00:13:13.430 "method": "nvmf_create_subsystem", 00:13:13.430 "req_id": 1 00:13:13.430 } 00:13:13.430 Got JSON-RPC error response 00:13:13.430 response: 00:13:13.430 { 00:13:13.430 "code": -32602, 00:13:13.430 "message": "Invalid MN SPDK_Controller\u001f" 00:13:13.430 }' 00:13:13.430 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:13.430 { 00:13:13.430 "nqn": "nqn.2016-06.io.spdk:cnode26587", 00:13:13.430 "model_number": "SPDK_Controller\u001f", 00:13:13.430 "method": "nvmf_create_subsystem", 00:13:13.430 "req_id": 1 00:13:13.430 } 00:13:13.430 Got JSON-RPC error response 00:13:13.430 response: 00:13:13.430 { 00:13:13.430 "code": -32602, 00:13:13.430 "message": "Invalid MN SPDK_Controller\u001f" 00:13:13.430 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:13.430 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:13.430 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:13.430 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' 
'86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:13.430 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:13.430 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:13.430 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:13.430 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.430 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:13:13.430 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:13.430 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:13:13.430 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.430 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.430 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:13.430 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:13.430 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:13.430 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.430 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.430 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:13:13.430 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:13.430 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:13:13.430 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.430 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.430 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:13:13.430 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:13.430 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:13:13.430 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.430 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.430 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:13:13.430 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:13.430 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:13:13.430 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.430 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.430 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:13:13.430 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:13.430 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:13:13.430 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.430 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.430 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:13:13.430 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:13.430 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:13:13.430 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.430 09:22:57 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.430 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:13:13.689 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:13.689 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:13:13.689 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.689 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.689 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:13:13.689 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:13.689 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:13:13.689 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.689 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.689 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:13:13.689 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:13.689 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:13:13.689 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.689 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.689 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:13:13.689 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:13:13.689 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:13:13.689 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.689 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.689 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:13:13.689 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:13:13.689 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:13:13.689 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.689 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.689 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:13:13.689 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:13.689 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:13:13.689 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.689 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.689 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:13:13.689 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:13.689 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:13:13.689 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.689 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.689 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:13.689 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:13.689 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:13.690 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.690 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.690 09:22:57 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:13:13.690 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:13.690 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:13:13.690 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.690 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.690 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:13:13.690 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:13.690 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:13:13.690 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.690 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.690 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:13:13.690 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:13.690 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:13:13.690 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.690 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.690 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:13:13.690 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:13.690 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:13:13.690 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.690 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.690 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:13:13.690 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:13.690 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:13:13.690 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.690 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.690 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:13:13.690 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:13.690 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:13:13.690 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.690 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.690 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ + == \- ]] 00:13:13.690 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '+`BEZT+-a1^rdxCj6%)&{' 00:13:13.690 09:22:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '+`BEZT+-a1^rdxCj6%)&{' nqn.2016-06.io.spdk:cnode6367 00:13:13.949 [2024-07-14 09:22:58.162116] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6367: invalid serial number '+`BEZT+-a1^rdxCj6%)&{' 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:13.949 { 00:13:13.949 "nqn": "nqn.2016-06.io.spdk:cnode6367", 00:13:13.949 "serial_number": "+`BEZT+-a1^rdxCj6%)&{", 00:13:13.949 "method": "nvmf_create_subsystem", 00:13:13.949 "req_id": 1 00:13:13.949 } 00:13:13.949 Got JSON-RPC error response 00:13:13.949 response: 00:13:13.949 { 00:13:13.949 
"code": -32602, 00:13:13.949 "message": "Invalid SN +`BEZT+-a1^rdxCj6%)&{" 00:13:13.949 }' 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:13.949 { 00:13:13.949 "nqn": "nqn.2016-06.io.spdk:cnode6367", 00:13:13.949 "serial_number": "+`BEZT+-a1^rdxCj6%)&{", 00:13:13.949 "method": "nvmf_create_subsystem", 00:13:13.949 "req_id": 1 00:13:13.949 } 00:13:13.949 Got JSON-RPC error response 00:13:13.949 response: 00:13:13.949 { 00:13:13.949 "code": -32602, 00:13:13.949 "message": "Invalid SN +`BEZT+-a1^rdxCj6%)&{" 00:13:13.949 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:13:13.949 09:22:58 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.949 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:13.950 09:22:58 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:13:13.950 09:22:58 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.950 09:22:58 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.950 
09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:13:13.950 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:13.951 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:13:13.951 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:13.951 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.951 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ , == \- ]] 00:13:13.951 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo ',,+xREBFfr:BX3ZKC5T^4-uM1J~`;!-T\=${$]:O[' 00:13:13.951 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d ',,+xREBFfr:BX3ZKC5T^4-uM1J~`;!-T\=${$]:O[' nqn.2016-06.io.spdk:cnode23897 00:13:14.209 [2024-07-14 09:22:58.543361] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23897: invalid model number ',,+xREBFfr:BX3ZKC5T^4-uM1J~`;!-T\=${$]:O[' 00:13:14.209 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:14.209 { 00:13:14.209 "nqn": "nqn.2016-06.io.spdk:cnode23897", 00:13:14.209 "model_number": ",,+xREBFfr:BX3ZKC5T^4-uM1J~`;!-T\\=${$]:O[", 00:13:14.209 "method": "nvmf_create_subsystem", 00:13:14.209 "req_id": 1 00:13:14.209 } 00:13:14.209 Got JSON-RPC error response 00:13:14.209 response: 00:13:14.209 { 00:13:14.209 "code": -32602, 00:13:14.209 "message": "Invalid MN ,,+xREBFfr:BX3ZKC5T^4-uM1J~`;!-T\\=${$]:O[" 00:13:14.209 }' 00:13:14.209 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:14.209 { 00:13:14.209 "nqn": "nqn.2016-06.io.spdk:cnode23897", 00:13:14.209 "model_number": ",,+xREBFfr:BX3ZKC5T^4-uM1J~`;!-T\\=${$]:O[", 00:13:14.209 "method": "nvmf_create_subsystem", 00:13:14.209 "req_id": 1 00:13:14.209 } 00:13:14.209 Got JSON-RPC error response 00:13:14.209 response: 00:13:14.209 { 00:13:14.209 "code": -32602, 00:13:14.209 "message": "Invalid MN ,,+xREBFfr:BX3ZKC5T^4-uM1J~`;!-T\\=${$]:O[" 00:13:14.209 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:14.209 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:14.467 [2024-07-14 09:22:58.780217] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:14.467 09:22:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:14.724 09:22:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:14.724 09:22:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:14.724 09:22:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:14.724 09:22:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:14.724 09:22:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:14.982 [2024-07-14 09:22:59.285839] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:14.982 09:22:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:14.982 { 00:13:14.982 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:14.982 "listen_address": { 00:13:14.982 "trtype": "tcp", 00:13:14.982 "traddr": "", 00:13:14.982 "trsvcid": "4421" 00:13:14.982 }, 00:13:14.982 "method": "nvmf_subsystem_remove_listener", 00:13:14.982 "req_id": 1 00:13:14.982 } 00:13:14.982 Got JSON-RPC error response 00:13:14.982 response: 00:13:14.982 { 00:13:14.982 "code": -32602, 00:13:14.982 "message": "Invalid parameters" 00:13:14.982 }' 00:13:14.982 09:22:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:14.982 { 00:13:14.982 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:14.982 "listen_address": { 00:13:14.982 "trtype": "tcp", 00:13:14.982 "traddr": "", 00:13:14.982 "trsvcid": "4421" 00:13:14.982 }, 00:13:14.982 "method": "nvmf_subsystem_remove_listener", 00:13:14.982 "req_id": 1 00:13:14.982 } 00:13:14.982 Got JSON-RPC error response 00:13:14.982 response: 00:13:14.982 { 00:13:14.982 "code": -32602, 00:13:14.982 "message": "Invalid parameters" 00:13:14.982 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:14.982 09:22:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7419 -i 0 00:13:15.239 [2024-07-14 09:22:59.534644] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7419: invalid cntlid range [0-65519] 00:13:15.239 09:22:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:15.239 { 00:13:15.239 "nqn": "nqn.2016-06.io.spdk:cnode7419", 00:13:15.239 "min_cntlid": 0, 00:13:15.239 "method": "nvmf_create_subsystem", 00:13:15.239 "req_id": 1 00:13:15.239 } 00:13:15.239 Got JSON-RPC error response 00:13:15.239 response: 00:13:15.239 { 00:13:15.239 "code": -32602, 00:13:15.239 "message": "Invalid cntlid range [0-65519]" 00:13:15.239 }' 00:13:15.239 09:22:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:15.239 { 00:13:15.239 "nqn": "nqn.2016-06.io.spdk:cnode7419", 00:13:15.239 "min_cntlid": 0, 00:13:15.239 "method": "nvmf_create_subsystem", 00:13:15.239 "req_id": 1 00:13:15.239 } 00:13:15.239 Got JSON-RPC error response 00:13:15.239 response: 00:13:15.239 { 00:13:15.239 "code": -32602, 00:13:15.239 "message": "Invalid cntlid range [0-65519]" 00:13:15.239 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* 
]] 00:13:15.239 09:22:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14134 -i 65520 00:13:15.497 [2024-07-14 09:22:59.827638] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14134: invalid cntlid range [65520-65519] 00:13:15.497 09:22:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:15.497 { 00:13:15.497 "nqn": "nqn.2016-06.io.spdk:cnode14134", 00:13:15.497 "min_cntlid": 65520, 00:13:15.497 "method": "nvmf_create_subsystem", 00:13:15.497 "req_id": 1 00:13:15.497 } 00:13:15.497 Got JSON-RPC error response 00:13:15.497 response: 00:13:15.497 { 00:13:15.497 "code": -32602, 00:13:15.497 "message": "Invalid cntlid range [65520-65519]" 00:13:15.497 }' 00:13:15.497 09:22:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:15.497 { 00:13:15.497 "nqn": "nqn.2016-06.io.spdk:cnode14134", 00:13:15.497 "min_cntlid": 65520, 00:13:15.497 "method": "nvmf_create_subsystem", 00:13:15.497 "req_id": 1 00:13:15.497 } 00:13:15.497 Got JSON-RPC error response 00:13:15.497 response: 00:13:15.497 { 00:13:15.497 "code": -32602, 00:13:15.497 "message": "Invalid cntlid range [65520-65519]" 00:13:15.497 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:15.497 09:22:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3704 -I 0 00:13:15.754 [2024-07-14 09:23:00.084536] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3704: invalid cntlid range [1-0] 00:13:15.754 09:23:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:15.754 { 00:13:15.754 "nqn": "nqn.2016-06.io.spdk:cnode3704", 00:13:15.754 "max_cntlid": 0, 00:13:15.754 "method": "nvmf_create_subsystem", 00:13:15.754 "req_id": 1 00:13:15.754 } 00:13:15.754 Got JSON-RPC error response 00:13:15.754 response: 00:13:15.754 { 00:13:15.754 "code": -32602, 00:13:15.754 "message": "Invalid cntlid range [1-0]" 00:13:15.754 }' 00:13:15.754 09:23:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:15.754 { 00:13:15.754 "nqn": "nqn.2016-06.io.spdk:cnode3704", 00:13:15.754 "max_cntlid": 0, 00:13:15.754 "method": "nvmf_create_subsystem", 00:13:15.754 "req_id": 1 00:13:15.754 } 00:13:15.754 Got JSON-RPC error response 00:13:15.754 response: 00:13:15.754 { 00:13:15.754 "code": -32602, 00:13:15.754 "message": "Invalid cntlid range [1-0]" 00:13:15.754 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:15.754 09:23:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16592 -I 65520 00:13:16.011 [2024-07-14 09:23:00.345349] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16592: invalid cntlid range [1-65520] 00:13:16.011 09:23:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:16.011 { 00:13:16.011 "nqn": "nqn.2016-06.io.spdk:cnode16592", 00:13:16.011 "max_cntlid": 65520, 00:13:16.011 "method": "nvmf_create_subsystem", 00:13:16.011 "req_id": 1 00:13:16.011 } 00:13:16.011 Got JSON-RPC error response 00:13:16.011 response: 00:13:16.011 { 00:13:16.011 "code": -32602, 00:13:16.011 "message": "Invalid cntlid range [1-65520]" 00:13:16.011 }' 00:13:16.011 09:23:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ 
request: 00:13:16.011 { 00:13:16.011 "nqn": "nqn.2016-06.io.spdk:cnode16592", 00:13:16.011 "max_cntlid": 65520, 00:13:16.012 "method": "nvmf_create_subsystem", 00:13:16.012 "req_id": 1 00:13:16.012 } 00:13:16.012 Got JSON-RPC error response 00:13:16.012 response: 00:13:16.012 { 00:13:16.012 "code": -32602, 00:13:16.012 "message": "Invalid cntlid range [1-65520]" 00:13:16.012 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:16.012 09:23:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8578 -i 6 -I 5 00:13:16.268 [2024-07-14 09:23:00.586143] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8578: invalid cntlid range [6-5] 00:13:16.268 09:23:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:16.268 { 00:13:16.268 "nqn": "nqn.2016-06.io.spdk:cnode8578", 00:13:16.268 "min_cntlid": 6, 00:13:16.268 "max_cntlid": 5, 00:13:16.268 "method": "nvmf_create_subsystem", 00:13:16.268 "req_id": 1 00:13:16.268 } 00:13:16.268 Got JSON-RPC error response 00:13:16.268 response: 00:13:16.268 { 00:13:16.268 "code": -32602, 00:13:16.268 "message": "Invalid cntlid range [6-5]" 00:13:16.268 }' 00:13:16.268 09:23:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:16.268 { 00:13:16.268 "nqn": "nqn.2016-06.io.spdk:cnode8578", 00:13:16.268 "min_cntlid": 6, 00:13:16.268 "max_cntlid": 5, 00:13:16.268 "method": "nvmf_create_subsystem", 00:13:16.268 "req_id": 1 00:13:16.268 } 00:13:16.268 Got JSON-RPC error response 00:13:16.268 response: 00:13:16.268 { 00:13:16.268 "code": -32602, 00:13:16.268 "message": "Invalid cntlid range [6-5]" 00:13:16.268 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:16.268 09:23:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:16.560 09:23:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:16.560 { 00:13:16.560 "name": "foobar", 00:13:16.560 "method": "nvmf_delete_target", 00:13:16.560 "req_id": 1 00:13:16.560 } 00:13:16.560 Got JSON-RPC error response 00:13:16.560 response: 00:13:16.560 { 00:13:16.560 "code": -32602, 00:13:16.560 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:16.560 }' 00:13:16.560 09:23:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:16.560 { 00:13:16.560 "name": "foobar", 00:13:16.560 "method": "nvmf_delete_target", 00:13:16.560 "req_id": 1 00:13:16.560 } 00:13:16.560 Got JSON-RPC error response 00:13:16.560 response: 00:13:16.560 { 00:13:16.560 "code": -32602, 00:13:16.560 "message": "The specified target doesn't exist, cannot delete it." 
00:13:16.560 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:16.560 09:23:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:16.560 09:23:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:16.560 09:23:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:16.560 09:23:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:13:16.560 09:23:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:16.560 09:23:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:13:16.560 09:23:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:16.560 09:23:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:16.560 rmmod nvme_tcp 00:13:16.560 rmmod nvme_fabrics 00:13:16.560 rmmod nvme_keyring 00:13:16.560 09:23:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:16.560 09:23:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:13:16.560 09:23:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:13:16.560 09:23:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 675160 ']' 00:13:16.560 09:23:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 675160 00:13:16.560 09:23:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 675160 ']' 00:13:16.560 09:23:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 675160 00:13:16.560 09:23:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:13:16.560 09:23:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:16.560 09:23:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 675160 00:13:16.560 09:23:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:16.560 09:23:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:16.560 09:23:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 675160' 00:13:16.560 killing process with pid 675160 00:13:16.560 09:23:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 675160 00:13:16.560 09:23:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 675160 00:13:16.817 09:23:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:16.817 09:23:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:16.817 09:23:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:16.817 09:23:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:16.817 09:23:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:16.817 09:23:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:16.817 09:23:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:16.817 09:23:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:18.720 09:23:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:18.720 00:13:18.720 real 0m8.690s 00:13:18.720 user 0m20.032s 00:13:18.720 sys 0m2.434s 00:13:18.720 09:23:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:18.720 09:23:03 nvmf_tcp.nvmf_invalid -- 
common/autotest_common.sh@10 -- # set +x 00:13:18.720 ************************************ 00:13:18.720 END TEST nvmf_invalid 00:13:18.720 ************************************ 00:13:18.720 09:23:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:18.720 09:23:03 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:18.720 09:23:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:18.720 09:23:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:18.720 09:23:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:18.720 ************************************ 00:13:18.720 START TEST nvmf_abort 00:13:18.720 ************************************ 00:13:18.720 09:23:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:18.994 * Looking for test storage... 00:13:18.994 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:18.994 09:23:03 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:18.994 09:23:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:13:18.994 09:23:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:18.994 09:23:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:18.994 09:23:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:18.994 09:23:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:18.994 09:23:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:18.994 09:23:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:18.994 09:23:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:18.994 09:23:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:18.994 09:23:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:18.994 09:23:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:18.994 09:23:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:18.994 09:23:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:18.994 09:23:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:18.994 09:23:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:18.994 09:23:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:18.994 09:23:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:18.994 09:23:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:18.994 09:23:03 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:18.994 09:23:03 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:18.994 09:23:03 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:18.994 09:23:03 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.994 09:23:03 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.994 09:23:03 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.994 09:23:03 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:13:18.994 09:23:03 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.994 09:23:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:13:18.994 09:23:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:18.994 09:23:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:18.994 09:23:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:18.994 09:23:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:18.994 09:23:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:18.994 09:23:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:18.994 09:23:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:18.994 09:23:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:18.994 09:23:03 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:18.994 09:23:03 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:13:18.994 09:23:03 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:13:18.994 09:23:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:18.994 09:23:03 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:18.994 09:23:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:18.994 09:23:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:18.994 09:23:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:18.994 09:23:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:18.994 09:23:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:18.994 09:23:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:18.994 09:23:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:18.994 09:23:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:18.994 09:23:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:13:18.994 09:23:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:20.899 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:20.899 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:13:20.899 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:20.899 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:20.899 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:20.899 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:20.899 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:20.899 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:13:20.899 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:20.899 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:13:20.899 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:13:20.899 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:13:20.899 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:13:20.899 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:13:20.899 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:13:20.899 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:20.899 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:20.899 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:20.899 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:20.899 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:20.899 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:20.899 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:20.899 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:20.899 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:20.899 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:20.899 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:20.899 
09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:20.899 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:20.899 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:20.899 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:20.899 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:20.899 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:20.899 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:20.899 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:20.899 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:20.899 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:20.899 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:20.899 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:20.899 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:20.900 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:20.900 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:20.900 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:20.900 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:20.900 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:13:20.900 00:13:20.900 --- 10.0.0.2 ping statistics --- 00:13:20.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.900 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:20.900 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:20.900 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:13:20.900 00:13:20.900 --- 10.0.0.1 ping statistics --- 00:13:20.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.900 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=677786 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 677786 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 677786 ']' 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:20.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:20.900 09:23:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:21.160 [2024-07-14 09:23:05.393341] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:13:21.160 [2024-07-14 09:23:05.393413] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:21.160 EAL: No free 2048 kB hugepages reported on node 1 00:13:21.160 [2024-07-14 09:23:05.455981] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:21.160 [2024-07-14 09:23:05.541813] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:21.160 [2024-07-14 09:23:05.541871] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:21.160 [2024-07-14 09:23:05.541902] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:21.160 [2024-07-14 09:23:05.541913] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:21.160 [2024-07-14 09:23:05.541922] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:21.160 [2024-07-14 09:23:05.542015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:21.160 [2024-07-14 09:23:05.542080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:21.160 [2024-07-14 09:23:05.542083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:21.417 09:23:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:21.417 09:23:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:13:21.417 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:21.417 09:23:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:21.417 09:23:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:21.417 09:23:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:21.417 09:23:05 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:21.417 09:23:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.417 09:23:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:21.417 [2024-07-14 09:23:05.677535] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:21.417 09:23:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.417 09:23:05 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:21.417 09:23:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.417 09:23:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:21.417 Malloc0 00:13:21.417 09:23:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.417 09:23:05 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:21.417 09:23:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.417 09:23:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:21.417 Delay0 00:13:21.417 09:23:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.417 09:23:05 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
00:13:21.417 09:23:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.417 09:23:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:21.417 09:23:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.417 09:23:05 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:21.417 09:23:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.417 09:23:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:21.417 09:23:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.417 09:23:05 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:21.417 09:23:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.417 09:23:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:21.417 [2024-07-14 09:23:05.756188] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:21.417 09:23:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.417 09:23:05 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:21.417 09:23:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.417 09:23:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:21.417 09:23:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.417 09:23:05 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:21.417 EAL: No free 2048 kB hugepages reported on node 1 00:13:21.674 [2024-07-14 09:23:05.893040] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:23.567 Initializing NVMe Controllers 00:13:23.567 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:23.567 controller IO queue size 128 less than required 00:13:23.567 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:23.567 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:23.567 Initialization complete. Launching workers. 
00:13:23.567 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 32829 00:13:23.567 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 32890, failed to submit 62 00:13:23.567 success 32833, unsuccess 57, failed 0 00:13:23.567 09:23:07 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:23.567 09:23:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.567 09:23:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:23.567 09:23:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.567 09:23:07 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:23.567 09:23:07 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:13:23.567 09:23:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:23.567 09:23:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:13:23.567 09:23:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:23.567 09:23:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:13:23.567 09:23:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:23.567 09:23:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:23.567 rmmod nvme_tcp 00:13:23.567 rmmod nvme_fabrics 00:13:23.825 rmmod nvme_keyring 00:13:23.825 09:23:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:23.825 09:23:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:13:23.825 09:23:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:13:23.825 09:23:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 677786 ']' 00:13:23.825 09:23:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 677786 00:13:23.825 09:23:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 677786 ']' 00:13:23.825 09:23:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 677786 00:13:23.825 09:23:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:13:23.825 09:23:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:23.825 09:23:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 677786 00:13:23.825 09:23:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:23.825 09:23:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:23.825 09:23:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 677786' 00:13:23.825 killing process with pid 677786 00:13:23.825 09:23:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 677786 00:13:23.825 09:23:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 677786 00:13:24.084 09:23:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:24.084 09:23:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:24.084 09:23:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:24.084 09:23:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:24.084 09:23:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:24.084 09:23:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:24.084 09:23:08 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:24.084 09:23:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:25.986 09:23:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:25.986 00:13:25.986 real 0m7.183s 00:13:25.986 user 0m9.982s 00:13:25.986 sys 0m2.712s 00:13:25.986 09:23:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:25.986 09:23:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:25.986 ************************************ 00:13:25.986 END TEST nvmf_abort 00:13:25.986 ************************************ 00:13:25.986 09:23:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:25.986 09:23:10 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:25.986 09:23:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:25.986 09:23:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:25.986 09:23:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:25.986 ************************************ 00:13:25.986 START TEST nvmf_ns_hotplug_stress 00:13:25.986 ************************************ 00:13:25.986 09:23:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:26.244 * Looking for test storage... 00:13:26.244 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:26.244 09:23:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:26.244 09:23:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:13:26.244 09:23:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:26.244 09:23:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:26.244 09:23:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:26.244 09:23:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:26.245 09:23:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:26.245 09:23:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:26.245 09:23:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:26.245 09:23:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:26.245 09:23:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:26.245 09:23:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:26.245 09:23:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:26.245 09:23:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:26.245 09:23:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:26.245 09:23:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:26.245 09:23:10 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:26.245 09:23:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:26.245 09:23:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:26.245 09:23:10 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:26.245 09:23:10 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:26.245 09:23:10 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:26.245 09:23:10 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.245 09:23:10 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.245 09:23:10 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.245 09:23:10 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:13:26.245 09:23:10 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.245 09:23:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:13:26.245 09:23:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:26.245 09:23:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:26.245 09:23:10 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:26.245 09:23:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:26.245 09:23:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:26.245 09:23:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:26.245 09:23:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:26.245 09:23:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:26.245 09:23:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:26.245 09:23:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:13:26.245 09:23:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:26.245 09:23:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:26.245 09:23:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:26.245 09:23:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:26.245 09:23:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:26.245 09:23:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.245 09:23:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:26.245 09:23:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.245 09:23:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:26.245 09:23:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:26.245 09:23:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:26.245 09:23:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local 
-ga mlx 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:28.149 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:28.149 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:28.149 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:28.149 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:28.149 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:28.408 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:28.408 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:28.408 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:28.408 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:28.408 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:13:28.408 00:13:28.408 --- 10.0.0.2 ping statistics --- 00:13:28.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:28.408 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:13:28.409 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:28.409 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:28.409 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:13:28.409 00:13:28.409 --- 10.0.0.1 ping statistics --- 00:13:28.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:28.409 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:13:28.409 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:28.409 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:13:28.409 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:28.409 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:28.409 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:28.409 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:28.409 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:28.409 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:28.409 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:28.409 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:13:28.409 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:28.409 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:28.409 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.409 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=680074 00:13:28.409 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:28.409 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 680074 00:13:28.409 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 680074 ']' 00:13:28.409 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:28.409 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:28.409 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:28.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:28.409 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:28.409 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.409 [2024-07-14 09:23:12.713547] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:13:28.409 [2024-07-14 09:23:12.713637] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:28.409 EAL: No free 2048 kB hugepages reported on node 1 00:13:28.409 [2024-07-14 09:23:12.779057] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:28.667 [2024-07-14 09:23:12.869306] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:28.667 [2024-07-14 09:23:12.869361] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:28.667 [2024-07-14 09:23:12.869390] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:28.667 [2024-07-14 09:23:12.869401] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:28.667 [2024-07-14 09:23:12.869411] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:28.667 [2024-07-14 09:23:12.869564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:28.667 [2024-07-14 09:23:12.869630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:28.667 [2024-07-14 09:23:12.869633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:28.667 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:28.667 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:13:28.667 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:28.667 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:28.667 09:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.667 09:23:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:28.667 09:23:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:13:28.668 09:23:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:28.924 [2024-07-14 09:23:13.288272] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:28.924 09:23:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:29.180 09:23:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:29.437 [2024-07-14 09:23:13.867164] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:29.437 09:23:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:29.694 09:23:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:13:29.952 Malloc0 00:13:29.952 09:23:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:30.210 Delay0 00:13:30.210 09:23:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:30.468 09:23:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:30.725 NULL1 00:13:30.725 09:23:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:30.983 09:23:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=680422 00:13:30.983 09:23:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:30.983 09:23:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 680422 00:13:30.983 09:23:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:30.983 EAL: No free 2048 kB hugepages reported on node 1 00:13:32.354 Read completed with error (sct=0, sc=11) 00:13:32.354 09:23:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:32.354 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:32.354 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:32.354 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:32.354 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:32.354 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:32.354 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:32.652 09:23:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:13:32.652 09:23:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:32.652 true 00:13:32.652 09:23:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 680422 00:13:32.652 09:23:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:33.603 09:23:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:33.860 09:23:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:13:33.860 09:23:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:33.860 true 00:13:34.118 09:23:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 680422 00:13:34.118 09:23:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.376 09:23:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:34.376 09:23:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:13:34.376 09:23:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:34.634 true 00:13:34.634 09:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 680422 00:13:34.634 09:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:35.567 09:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:35.824 09:23:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:35.824 09:23:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:36.082 true 00:13:36.082 09:23:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 680422 00:13:36.082 09:23:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:36.340 09:23:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:36.597 09:23:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:36.597 09:23:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:36.855 true 00:13:36.855 09:23:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 680422 00:13:36.855 09:23:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.111 09:23:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:37.369 09:23:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:13:37.369 09:23:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:37.369 true 00:13:37.626 09:23:21 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 680422 00:13:37.626 09:23:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:38.559 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:38.559 09:23:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:38.559 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:38.559 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:38.559 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:38.559 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:38.817 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:38.817 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:38.817 09:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:13:38.817 09:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:39.075 true 00:13:39.075 09:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 680422 00:13:39.075 09:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.007 09:23:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:40.007 09:23:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:13:40.007 09:23:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:40.264 true 00:13:40.264 09:23:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 680422 00:13:40.264 09:23:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.522 09:23:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:40.779 09:23:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:13:40.779 09:23:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:41.036 true 00:13:41.036 09:23:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 680422 00:13:41.036 09:23:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:41.969 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:41.969 09:23:26 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:41.969 09:23:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:13:41.969 09:23:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:13:42.226 true 00:13:42.226 09:23:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 680422 00:13:42.226 09:23:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.483 09:23:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:42.741 09:23:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:13:42.741 09:23:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:42.998 true 00:13:42.998 09:23:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 680422 00:13:42.998 09:23:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:43.930 09:23:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:44.187 09:23:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:13:44.187 09:23:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:44.445 true 00:13:44.445 09:23:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 680422 00:13:44.445 09:23:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:44.702 09:23:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:44.960 09:23:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:13:44.960 09:23:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:45.217 true 00:13:45.217 09:23:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 680422 00:13:45.217 09:23:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:45.475 09:23:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:45.732 
09:23:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:13:45.732 09:23:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:45.988 true 00:13:45.988 09:23:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 680422 00:13:45.988 09:23:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:46.949 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:46.949 09:23:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:47.207 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:47.207 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:47.207 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:47.207 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:47.207 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:47.207 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:47.207 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:47.465 09:23:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:13:47.465 09:23:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:47.722 true 00:13:47.722 09:23:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 680422 00:13:47.722 09:23:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.287 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:48.287 09:23:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:48.544 09:23:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:13:48.544 09:23:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:48.802 true 00:13:48.802 09:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 680422 00:13:48.802 09:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.059 09:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:49.315 09:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:13:49.315 09:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1017 00:13:49.571 true 00:13:49.571 09:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 680422 00:13:49.571 09:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.505 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:50.505 09:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:50.762 09:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:13:50.762 09:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:51.019 true 00:13:51.019 09:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 680422 00:13:51.019 09:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.277 09:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:51.535 09:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:13:51.535 09:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:51.792 true 00:13:51.792 09:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 680422 00:13:51.792 09:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.724 09:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:52.724 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:52.982 09:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:13:52.982 09:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:53.240 true 00:13:53.240 09:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 680422 00:13:53.240 09:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.497 09:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:53.497 09:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:13:53.497 09:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1021 00:13:53.755 true 00:13:53.755 09:23:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 680422 00:13:53.755 09:23:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:54.687 09:23:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:54.945 09:23:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:13:54.945 09:23:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:55.203 true 00:13:55.203 09:23:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 680422 00:13:55.203 09:23:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.461 09:23:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:55.719 09:23:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:13:55.719 09:23:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:55.977 true 00:13:55.977 09:23:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 680422 00:13:55.977 09:23:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:56.911 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:56.911 09:23:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:56.911 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:56.911 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:56.911 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:57.168 09:23:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:13:57.168 09:23:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:13:57.426 true 00:13:57.426 09:23:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 680422 00:13:57.426 09:23:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:57.684 09:23:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:57.940 09:23:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1025 00:13:57.940 09:23:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:58.196 true 00:13:58.196 09:23:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 680422 00:13:58.196 09:23:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:59.128 09:23:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:59.128 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:59.385 09:23:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:13:59.385 09:23:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:59.642 true 00:13:59.642 09:23:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 680422 00:13:59.642 09:23:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:59.941 09:23:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:00.198 09:23:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:14:00.198 09:23:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:14:00.456 true 00:14:00.456 09:23:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 680422 00:14:00.456 09:23:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:01.389 09:23:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:01.389 Initializing NVMe Controllers 00:14:01.389 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:01.389 Controller IO queue size 128, less than required. 00:14:01.389 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:01.389 Controller IO queue size 128, less than required. 00:14:01.389 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:01.389 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:01.389 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:01.389 Initialization complete. Launching workers. 
00:14:01.389 ======================================================== 00:14:01.389 Latency(us) 00:14:01.389 Device Information : IOPS MiB/s Average min max 00:14:01.389 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1086.60 0.53 66682.51 2779.53 1087334.17 00:14:01.389 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 11762.58 5.74 10882.85 2906.21 367585.52 00:14:01.389 ======================================================== 00:14:01.389 Total : 12849.17 6.27 15601.58 2779.53 1087334.17 00:14:01.389 00:14:01.389 09:23:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:14:01.389 09:23:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:14:01.646 true 00:14:01.646 09:23:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 680422 00:14:01.646 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (680422) - No such process 00:14:01.646 09:23:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 680422 00:14:01.646 09:23:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:01.903 09:23:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:02.159 09:23:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:14:02.159 09:23:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:14:02.159 09:23:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:14:02.159 09:23:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:02.159 09:23:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:14:02.415 null0 00:14:02.415 09:23:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:02.415 09:23:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:02.415 09:23:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:14:02.672 null1 00:14:02.672 09:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:02.672 09:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:02.672 09:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:14:02.929 null2 00:14:02.929 09:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:02.929 09:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:02.929 09:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:14:03.186 null3 
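[annotation] The trace above is the first phase of ns_hotplug_stress.sh: while the I/O generator (pid 680422, whose Latency(us) summary is printed just before) stays alive, the script keeps hot-removing and re-adding namespace 1 (backed by the Delay0 bdev) on nqn.2016-06.io.spdk:cnode1 and bumping the NULL1 null bdev size by 1 each pass (1010 through 1028 in this excerpt). Once kill -0 reports "No such process", it waits on the generator and removes both namespaces. A minimal sketch of that loop, reconstructed from the traced commands; the rpc.py subcommands and arguments are taken from the trace, while the variable names ($rpc, $nqn, $perf_pid) and the exact loop structure are assumptions, not the script verbatim:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    null_size=1000                      # grows by 1 (MiB) per pass; 1010..1028 in this excerpt
    while kill -0 "$perf_pid"; do       # loop while the I/O generator (pid 680422 here) is alive
        "$rpc" nvmf_subsystem_remove_ns "$nqn" 1      # hot-remove namespace 1
        "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0    # hot-add it back, backed by the Delay0 bdev
        null_size=$((null_size + 1))
        "$rpc" bdev_null_resize NULL1 "$null_size"    # resize NULL1 while I/O keeps running
    done
    wait "$perf_pid"                    # reap the generator once kill -0 reports it gone
    "$rpc" nvmf_subsystem_remove_ns "$nqn" 1
    "$rpc" nvmf_subsystem_remove_ns "$nqn" 2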
00:14:03.186 09:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:03.186 09:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:03.186 09:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:14:03.444 null4 00:14:03.444 09:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:03.444 09:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:03.444 09:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:14:03.699 null5 00:14:03.699 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:03.699 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:03.699 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:14:03.955 null6 00:14:03.955 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:03.955 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:03.955 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:14:04.213 null7 00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
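[annotation] The second phase sets up one null bdev per worker before the multi-threaded add/remove churn starts: the trace shows nthreads=8 and eight bdev_null_create calls (null0 through null7, each 100 MiB with a 4096-byte block size). A short sketch of that setup; the bdev_null_create arguments and nthreads=8 come from the trace, the for-loop form and $rpc shorthand are assumed:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nthreads=8
    for ((i = 0; i < nthreads; i++)); do
        # arguments as traced: bdev name, size in MiB, block size in bytes
        "$rpc" bdev_null_create "null$i" 100 4096
    done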
00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
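[annotation] The "add_remove N nullM" calls traced above (ns_hotplug_stress.sh lines 14-18) are the per-worker routine: each worker repeatedly hot-adds its namespace ID backed by its own null bdev and then hot-removes it, ten passes per worker per the "(( i < 10 ))" guard in the trace. A sketch of that routine as the trace suggests it; the function body is reconstructed, and $rpc and $nqn are assumed shorthand:

    # nsid and bdev arrive as positional arguments ("add_remove 2 null1" etc. in the trace)
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"   # hot-add namespace $nsid
            "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"           # hot-remove it again
        done
    }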
00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
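[annotation] The workers are launched in the background, one per null bdev, with their pids collected via "pids+=($!)" as shown above; the script then waits on all eight at once (the "wait 684474 684475 ..." entry that follows). A sketch of that orchestration under the same assumptions as the previous blocks; the nsid = i+1 mapping is inferred from the "add_remove 1 null0" through "add_remove 8 null7" calls in the trace:

    pids=()
    for ((i = 0; i < nthreads; i++)); do
        add_remove "$((i + 1))" "null$i" &   # namespace i+1 backed by bdev null$i
        pids+=($!)                            # remember the worker pid
    done
    wait "${pids[@]}"                         # matches the "wait 684474 684475 ..." trace entry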
00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:04.471 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:14:04.472 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:04.472 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 684474 684475 684477 684479 684481 684483 684485 684487 00:14:04.472 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.472 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:04.729 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:04.729 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:04.729 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:04.729 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:04.729 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:04.729 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:04.729 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:04.729 09:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:04.987 09:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.987 09:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.987 09:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:04.987 09:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.987 09:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.987 09:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:14:04.987 09:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.987 09:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.987 09:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:04.987 09:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.987 09:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.987 09:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:04.987 09:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.987 09:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.987 09:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:04.987 09:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.987 09:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.987 09:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.987 09:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.987 09:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:04.987 09:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:04.987 09:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.987 09:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.987 09:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:05.245 09:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:05.245 09:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:05.245 09:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:05.245 09:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:05.245 09:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:05.245 09:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:05.245 09:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:05.245 09:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:05.503 09:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.503 09:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.503 09:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:05.503 09:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.503 09:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.503 09:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:05.503 09:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.503 09:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.503 09:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:05.503 09:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.503 09:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.503 09:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:05.503 09:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.503 09:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.503 09:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:05.503 09:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.503 09:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.503 09:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:05.503 09:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.503 09:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.503 09:23:49 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:05.503 09:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.503 09:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.503 09:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:05.761 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:05.761 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:05.761 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:05.761 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:05.761 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:05.761 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:05.761 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:05.761 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:06.018 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.018 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.018 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:06.018 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.018 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.018 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:06.018 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.018 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.018 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:06.018 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.018 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.019 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:06.019 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.019 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.019 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.019 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.019 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:06.019 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:06.019 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.019 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.019 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:06.019 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.019 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.019 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:06.276 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:06.276 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:06.276 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:06.276 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:06.276 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:06.276 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:06.276 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:06.276 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:06.533 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.533 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.533 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:06.533 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.533 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.533 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:06.533 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.533 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.533 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:06.533 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.533 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.533 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:06.533 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.533 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.534 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:06.534 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.534 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.534 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:06.534 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.534 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.534 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:06.534 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.534 09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.534 
09:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:06.791 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:06.791 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:06.791 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:06.791 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:06.791 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:06.791 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:06.791 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:06.791 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:07.049 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.049 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.049 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:07.049 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.049 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.049 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:07.049 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.049 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.049 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:07.049 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.049 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.049 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:07.049 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.049 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.049 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.049 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:07.049 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.049 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:07.049 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.049 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.049 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:07.049 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.049 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.049 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:07.306 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:07.306 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:07.306 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:07.306 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:07.306 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:07.306 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:07.306 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:07.306 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:07.564 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:14:07.564 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.564 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:07.564 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.564 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.564 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:07.564 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.564 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.564 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:07.564 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.564 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.564 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:07.564 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.564 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.564 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:07.564 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.564 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.564 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:07.564 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.564 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.564 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:07.564 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:07.564 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:07.564 09:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:07.821 09:23:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:07.821 
09:23:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:07.821 09:23:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:07.821 09:23:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:07.821 09:23:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:07.821 09:23:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:07.822 09:23:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:07.822 09:23:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:08.079 09:23:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.079 09:23:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.079 09:23:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.079 09:23:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.079 09:23:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:08.079 09:23:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:08.079 09:23:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.079 09:23:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.079 09:23:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:08.079 09:23:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.079 09:23:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.079 09:23:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:08.079 09:23:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.079 09:23:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.079 09:23:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:08.079 09:23:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.079 09:23:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.079 09:23:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:08.079 09:23:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.079 09:23:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.079 09:23:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:08.079 09:23:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.079 09:23:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.079 09:23:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:08.342 09:23:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:08.342 09:23:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:08.342 09:23:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:08.342 09:23:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:08.342 09:23:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:08.342 09:23:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:08.342 09:23:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:08.600 09:23:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:08.600 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.600 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.600 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:08.600 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:14:08.600 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.600 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:08.600 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.600 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.600 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:08.600 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.600 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.858 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:08.858 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.858 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.858 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:08.858 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.858 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.858 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:08.858 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.858 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.858 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:08.858 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:08.858 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:08.858 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:09.116 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:09.116 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:09.116 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:09.116 
09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:09.116 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:09.116 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:09.116 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:09.116 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:09.423 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.423 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.423 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:09.423 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.423 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.424 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:09.424 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.424 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.424 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:09.424 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.424 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.424 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:09.424 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.424 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.424 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:09.424 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.424 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.424 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.424 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:09.424 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.424 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:09.424 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.424 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.424 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:09.424 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:09.424 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:09.424 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:09.682 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:09.682 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:09.682 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:09.682 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:09.682 09:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:09.682 09:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.682 09:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.682 09:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.682 09:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.940 09:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.940 09:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.940 09:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.940 09:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.940 09:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:14:09.940 09:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.940 09:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.940 09:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.940 09:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.940 09:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.940 09:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:09.940 09:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:09.940 09:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:09.940 09:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:14:09.940 09:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:09.940 09:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:14:09.940 09:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:09.940 09:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:14:09.940 09:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:09.940 09:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:09.940 rmmod nvme_tcp 00:14:09.940 rmmod nvme_fabrics 00:14:09.940 rmmod nvme_keyring 00:14:09.940 09:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:09.940 09:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:14:09.940 09:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:14:09.940 09:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 680074 ']' 00:14:09.940 09:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 680074 00:14:09.940 09:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 680074 ']' 00:14:09.940 09:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 680074 00:14:09.940 09:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:14:09.940 09:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:09.940 09:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 680074 00:14:09.940 09:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:09.940 09:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:09.940 09:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 680074' 00:14:09.940 killing process with pid 680074 00:14:09.940 09:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 680074 00:14:09.940 09:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 680074 00:14:10.199 09:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:10.199 09:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:10.199 09:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- 
# nvmf_tcp_fini 00:14:10.199 09:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:10.199 09:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:10.199 09:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:10.199 09:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:10.199 09:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:12.101 09:23:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:12.101 00:14:12.101 real 0m46.145s 00:14:12.101 user 3m29.232s 00:14:12.101 sys 0m16.457s 00:14:12.101 09:23:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:12.101 09:23:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:12.101 ************************************ 00:14:12.101 END TEST nvmf_ns_hotplug_stress 00:14:12.101 ************************************ 00:14:12.360 09:23:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:12.360 09:23:56 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:12.360 09:23:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:12.360 09:23:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:12.360 09:23:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:12.360 ************************************ 00:14:12.360 START TEST nvmf_connect_stress 00:14:12.360 ************************************ 00:14:12.360 09:23:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:12.360 * Looking for test storage... 
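The trace above is the tail of the ns_hotplug_stress run: lines @16-@18 of ns_hotplug_stress.sh repeatedly attach namespaces 1-8 (backed by the null bdevs null0-null7) to nqn.2016-06.io.spdk:cnode1 and detach them again while traffic is in flight, which is why the add/remove calls arrive in shuffled batches of eight. A minimal sketch of that pattern, reconstructed from the traced rpc.py calls rather than copied from the script (the add_remove helper, the parallel per-namespace jobs and the 10-iteration bound are inferred from the @16 counters):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    add_remove() {
        # one worker per namespace: attach its null bdev, then detach it, ten times over
        local nsid=$1 bdev=$2 i
        for ((i = 0; i < 10; i++)); do
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"
            "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"
        done
    }

    for n in $(seq 1 8); do
        add_remove "$n" "null$((n - 1))" &   # null0..null7 are created earlier in the script, not shown here
    done
    wait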
00:14:12.360 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:12.360 09:23:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:12.360 09:23:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:14:12.360 09:23:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:12.360 09:23:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:12.360 09:23:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:12.360 09:23:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:12.360 09:23:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:12.360 09:23:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:12.360 09:23:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:12.360 09:23:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:12.360 09:23:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:12.360 09:23:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:12.360 09:23:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:12.360 09:23:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:12.360 09:23:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:12.360 09:23:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:12.360 09:23:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:12.361 09:23:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:12.361 09:23:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:12.361 09:23:56 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:12.361 09:23:56 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:12.361 09:23:56 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:12.361 09:23:56 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.361 09:23:56 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.361 09:23:56 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.361 09:23:56 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:14:12.361 09:23:56 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.361 09:23:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:14:12.361 09:23:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:12.361 09:23:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:12.361 09:23:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:12.361 09:23:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:12.361 09:23:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:12.361 09:23:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:12.361 09:23:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:12.361 09:23:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:12.361 09:23:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:12.361 09:23:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:12.361 09:23:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:12.361 09:23:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:12.361 09:23:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:12.361 09:23:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:12.361 09:23:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:12.361 09:23:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:14:12.361 09:23:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:12.361 09:23:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:12.361 09:23:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:12.361 09:23:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:14:12.361 09:23:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.264 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:14.264 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:14:14.264 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:14.264 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:14.264 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:14.264 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:14.264 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:14.264 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:14:14.264 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:14.264 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:14:14.264 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:14:14.264 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:14:14.264 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:14:14.264 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:14:14.264 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:14:14.264 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:14.264 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:14.264 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:14.264 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:14.264 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:14.264 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:14.264 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:14.264 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:14.264 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:14.264 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:14.264 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:14.264 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:14.264 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:14.264 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:14:14.264 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:14.264 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:14.264 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:14.264 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:14.264 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:14.264 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:14.264 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:14.265 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:14.265 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:14.265 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:14.265 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:14.265 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:14.265 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:14.265 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:14.265 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:14.265 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:14.265 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:14.265 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:14.265 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:14.265 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:14.265 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:14.265 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:14.265 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:14.265 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:14.265 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:14.265 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:14.265 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:14.265 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:14.265 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:14.265 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:14.265 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:14.265 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:14.265 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:14.265 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:14.265 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:14.265 09:23:58 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:14.265 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:14.265 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:14.265 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:14.265 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:14.265 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:14.265 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:14.265 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:14.265 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:14:14.265 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:14.265 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:14.265 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:14.265 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:14.265 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:14.265 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:14.265 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:14.265 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:14.265 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:14.265 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:14.265 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:14.265 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:14.265 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:14.265 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:14.265 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:14.265 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:14.265 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:14.265 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:14.524 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:14.524 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:14.524 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:14.524 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:14.524 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:14.524 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:14.524 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:14:14.524 00:14:14.524 --- 10.0.0.2 ping statistics --- 00:14:14.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.524 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:14:14.524 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:14.524 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:14.524 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:14:14.524 00:14:14.524 --- 10.0.0.1 ping statistics --- 00:14:14.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.524 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:14:14.524 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:14.524 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:14:14.524 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:14.524 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:14.524 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:14.524 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:14.524 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:14.524 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:14.524 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:14.524 09:23:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:14.524 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:14.524 09:23:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:14.524 09:23:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.524 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=687233 00:14:14.524 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:14.524 09:23:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 687233 00:14:14.524 09:23:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 687233 ']' 00:14:14.524 09:23:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.524 09:23:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:14.524 09:23:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:14.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:14.524 09:23:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:14.524 09:23:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.524 [2024-07-14 09:23:58.853460] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
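nvmftestinit/nvmf_tcp_init above discover the two Intel E810 ports by PCI ID (0x8086:0x159b), find their net devices under /sys/bus/pci/devices/*/net (cvl_0_0 and cvl_0_1), and wire them into the usual two-namespace loopback topology before the target is launched: cvl_0_0 becomes the target-side interface inside a private network namespace with 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator side with 10.0.0.1, and nvmf_tgt is then started inside that namespace. A condensed sketch of the ip/iptables/nvmf_tgt invocations traced above (ordering and error handling trimmed; the ns variable name is illustrative):

    ns=cvl_0_0_ns_spdk
    ip netns add "$ns"
    ip link set cvl_0_0 netns "$ns"                             # target-side port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                         # initiator address, root namespace
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0     # target address inside the namespace
    ip link set cvl_0_1 up
    ip netns exec "$ns" ip link set cvl_0_0 up
    ip netns exec "$ns" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                          # reachability check shown in the ping output above
    modprobe nvme-tcp
    ip netns exec "$ns" /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &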
00:14:14.524 [2024-07-14 09:23:58.853546] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:14.524 EAL: No free 2048 kB hugepages reported on node 1 00:14:14.524 [2024-07-14 09:23:58.921752] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:14.783 [2024-07-14 09:23:59.012022] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:14.783 [2024-07-14 09:23:59.012086] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:14.783 [2024-07-14 09:23:59.012103] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:14.783 [2024-07-14 09:23:59.012117] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:14.783 [2024-07-14 09:23:59.012129] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:14.783 [2024-07-14 09:23:59.012218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:14.783 [2024-07-14 09:23:59.012342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:14.783 [2024-07-14 09:23:59.012345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:14.783 09:23:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:14.783 09:23:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:14:14.783 09:23:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:14.783 09:23:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:14.783 09:23:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.783 09:23:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:14.783 09:23:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:14.783 09:23:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.783 09:23:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.783 [2024-07-14 09:23:59.158679] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:14.783 09:23:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.783 09:23:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:14.783 09:23:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.783 09:23:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.783 09:23:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.783 09:23:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:14.783 09:23:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.783 09:23:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.783 [2024-07-14 09:23:59.189039] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:14.783 09:23:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.783 09:23:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:14.783 09:23:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.783 09:23:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.783 NULL1 00:14:14.783 09:23:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.783 09:23:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=687261 00:14:14.783 09:23:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:14.783 09:23:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:14.783 09:23:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:14.783 09:23:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:14.783 09:23:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.783 09:23:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.783 09:23:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.783 09:23:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.783 09:23:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.783 09:23:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.783 09:23:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.784 09:23:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.784 09:23:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.784 09:23:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.784 09:23:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.784 09:23:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.784 09:23:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.784 09:23:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.784 09:23:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.784 09:23:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.784 09:23:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.784 09:23:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.784 09:23:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.784 09:23:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.784 09:23:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
00:14:14.784 09:23:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.784 09:23:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.784 EAL: No free 2048 kB hugepages reported on node 1 00:14:14.784 09:23:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.784 09:23:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.784 09:23:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.784 09:23:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.784 09:23:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:15.042 09:23:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:15.042 09:23:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:15.042 09:23:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:15.042 09:23:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:15.042 09:23:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:15.042 09:23:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:15.042 09:23:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:15.042 09:23:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:15.042 09:23:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:15.042 09:23:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:15.042 09:23:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:15.042 09:23:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:15.042 09:23:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687261 00:14:15.042 09:23:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.042 09:23:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.042 09:23:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.300 09:23:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.300 09:23:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687261 00:14:15.300 09:23:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.300 09:23:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.300 09:23:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.558 09:23:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.558 09:23:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687261 00:14:15.558 09:23:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.558 09:23:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.558 09:23:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.816 09:24:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.816 09:24:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687261 00:14:15.816 
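With the target listening, connect_stress.sh@15 onwards configures it purely through RPC and then launches the stress client: a TCP transport, a subsystem with serial SPDK00000000000001 capped at 10 namespaces, a listener on 10.0.0.2:4420, a null bdev named NULL1, and the connect_stress tool pointed at that listener for 10 seconds. The @27/@28 cat loop builds a 20-entry batch file (rpc.txt, contents not visible in the trace), and the repeated @34 kill -0 / @35 rpc_cmd pairs that follow look like a supervision loop replaying that batch for as long as the client (pid 687261) stays alive. A condensed sketch using the calls traced above (the while-loop form and feeding rpc.txt to rpc_cmd are inferences):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    "$rpc" nvmf_create_transport -t tcp -o -u 8192         # transport options exactly as traced
    "$rpc" nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10
    "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    "$rpc" bdev_null_create NULL1 1000 512                 # null bdev created for the test (arguments as traced)

    # stress client: repeated connect/disconnect against the listener for 10 seconds
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress \
        -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
    PERF_PID=$!

    while kill -0 "$PERF_PID" 2> /dev/null; do
        rpc_cmd < rpc.txt    # rpc_cmd is the autotest RPC wrapper seen at @35; replaying rpc.txt is inferred
    done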
09:24:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.816 09:24:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.816 09:24:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.382 09:24:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.382 09:24:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687261 00:14:16.382 09:24:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.382 09:24:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.382 09:24:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.639 09:24:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.639 09:24:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687261 00:14:16.640 09:24:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.640 09:24:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.640 09:24:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.896 09:24:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.896 09:24:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687261 00:14:16.896 09:24:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.896 09:24:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.896 09:24:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.153 09:24:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.153 09:24:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687261 00:14:17.153 09:24:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.153 09:24:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.153 09:24:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.411 09:24:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.411 09:24:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687261 00:14:17.411 09:24:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.411 09:24:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.411 09:24:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.976 09:24:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.976 09:24:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687261 00:14:17.976 09:24:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.976 09:24:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.976 09:24:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.234 09:24:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.234 09:24:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687261 00:14:18.234 09:24:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 
-- # rpc_cmd 00:14:18.234 09:24:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.234 09:24:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.492 09:24:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.492 09:24:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687261 00:14:18.492 09:24:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.492 09:24:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.492 09:24:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.749 09:24:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.749 09:24:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687261 00:14:18.749 09:24:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.749 09:24:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.749 09:24:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:19.006 09:24:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.006 09:24:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687261 00:14:19.006 09:24:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:19.006 09:24:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.006 09:24:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:19.571 09:24:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.571 09:24:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687261 00:14:19.571 09:24:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:19.571 09:24:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.571 09:24:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:19.828 09:24:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.828 09:24:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687261 00:14:19.828 09:24:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:19.828 09:24:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.828 09:24:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:20.085 09:24:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.085 09:24:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687261 00:14:20.085 09:24:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:20.085 09:24:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.085 09:24:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:20.343 09:24:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.343 09:24:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687261 00:14:20.343 09:24:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:20.343 09:24:04 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.343 09:24:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:20.600 09:24:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.600 09:24:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687261 00:14:20.600 09:24:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:20.600 09:24:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.600 09:24:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:21.165 09:24:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.165 09:24:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687261 00:14:21.165 09:24:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:21.165 09:24:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.165 09:24:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:21.423 09:24:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.423 09:24:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687261 00:14:21.423 09:24:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:21.423 09:24:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.423 09:24:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:21.681 09:24:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.681 09:24:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687261 00:14:21.681 09:24:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:21.681 09:24:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.681 09:24:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:21.938 09:24:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.938 09:24:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687261 00:14:21.938 09:24:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:21.938 09:24:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.938 09:24:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:22.196 09:24:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.196 09:24:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687261 00:14:22.196 09:24:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:22.196 09:24:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.196 09:24:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:22.762 09:24:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.762 09:24:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687261 00:14:22.762 09:24:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:22.762 09:24:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.762 
09:24:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:23.019 09:24:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.019 09:24:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687261 00:14:23.019 09:24:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:23.020 09:24:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.020 09:24:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:23.277 09:24:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.277 09:24:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687261 00:14:23.277 09:24:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:23.277 09:24:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.277 09:24:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:23.535 09:24:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.535 09:24:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687261 00:14:23.535 09:24:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:23.535 09:24:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.535 09:24:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:24.101 09:24:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.101 09:24:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687261 00:14:24.101 09:24:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:24.101 09:24:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.101 09:24:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:24.359 09:24:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.359 09:24:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687261 00:14:24.359 09:24:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:24.359 09:24:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.359 09:24:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:24.616 09:24:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.616 09:24:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687261 00:14:24.616 09:24:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:24.616 09:24:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.616 09:24:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:24.875 09:24:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.875 09:24:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687261 00:14:24.875 09:24:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:24.875 09:24:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.875 09:24:09 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:14:24.875 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:25.133 09:24:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.133 09:24:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687261 00:14:25.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (687261) - No such process 00:14:25.133 09:24:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 687261 00:14:25.133 09:24:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:25.133 09:24:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:25.133 09:24:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:25.133 09:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:25.133 09:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:14:25.133 09:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:25.133 09:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:14:25.133 09:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:25.133 09:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:25.133 rmmod nvme_tcp 00:14:25.133 rmmod nvme_fabrics 00:14:25.133 rmmod nvme_keyring 00:14:25.391 09:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:25.391 09:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:14:25.391 09:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:14:25.391 09:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 687233 ']' 00:14:25.391 09:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 687233 00:14:25.392 09:24:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 687233 ']' 00:14:25.392 09:24:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 687233 00:14:25.392 09:24:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:14:25.392 09:24:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:25.392 09:24:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 687233 00:14:25.392 09:24:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:25.392 09:24:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:25.392 09:24:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 687233' 00:14:25.392 killing process with pid 687233 00:14:25.392 09:24:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 687233 00:14:25.392 09:24:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 687233 00:14:25.650 09:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:25.650 09:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:25.650 09:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:25.650 09:24:09 nvmf_tcp.nvmf_connect_stress 
-- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:25.650 09:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:25.650 09:24:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:25.650 09:24:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:25.650 09:24:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:27.555 09:24:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:27.555 00:14:27.555 real 0m15.297s 00:14:27.555 user 0m38.048s 00:14:27.555 sys 0m6.055s 00:14:27.555 09:24:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:27.555 09:24:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:27.555 ************************************ 00:14:27.555 END TEST nvmf_connect_stress 00:14:27.555 ************************************ 00:14:27.555 09:24:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:27.555 09:24:11 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:27.555 09:24:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:27.555 09:24:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:27.555 09:24:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:27.555 ************************************ 00:14:27.555 START TEST nvmf_fused_ordering 00:14:27.555 ************************************ 00:14:27.555 09:24:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:27.555 * Looking for test storage... 
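The nvmf_connect_stress trace above follows a simple liveness-polling pattern: a stress tool runs in the background while the script keeps issuing RPCs to the target, and teardown starts once kill -0 reports the stress process gone ("No such process"), at which point the background job is reaped, rpc.txt is removed, and nvmftestfini unloads the nvme modules and kills the target. A minimal sketch of that loop, with assumed variable names and an assumed RPC batch file (the trace only shows bare kill -0 / rpc_cmd calls plus the final rm -f .../rpc.txt):

    # Sketch only -- $STRESS_PID and $rpc_file are illustrative names, not the script's own.
    while kill -0 "$STRESS_PID" 2> /dev/null; do
        rpc_cmd < "$rpc_file" > /dev/null   # keep the target's RPC socket busy while the stress tool runs
    done
    wait "$STRESS_PID"                      # reap the background job ("wait 687261" in the trace)
    rm -f "$rpc_file"                       # matches the rm -f .../rpc.txt step above
    trap - SIGINT SIGTERM EXIT
    nvmftestfini                            # rmmod nvme_tcp/nvme_fabrics/nvme_keyring, kill the nvmf target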
00:14:27.555 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:27.555 09:24:11 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:27.555 09:24:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:27.555 09:24:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:27.555 09:24:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:27.555 09:24:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:27.555 09:24:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:27.555 09:24:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:27.555 09:24:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:27.555 09:24:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:27.555 09:24:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:27.555 09:24:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:27.555 09:24:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:27.555 09:24:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:27.555 09:24:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:27.555 09:24:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:27.555 09:24:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:27.555 09:24:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:27.555 09:24:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:27.555 09:24:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:27.555 09:24:12 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:27.555 09:24:12 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:27.555 09:24:12 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:27.555 09:24:12 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.555 09:24:12 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.555 09:24:12 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.555 09:24:12 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:27.555 09:24:12 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.555 09:24:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:14:27.555 09:24:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:27.555 09:24:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:27.555 09:24:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:27.555 09:24:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:27.555 09:24:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:27.555 09:24:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:27.814 09:24:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:27.814 09:24:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:27.814 09:24:12 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:27.814 09:24:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:27.814 09:24:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:27.814 09:24:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:27.814 09:24:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:27.814 09:24:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:27.814 09:24:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:27.814 09:24:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:14:27.814 09:24:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:27.814 09:24:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:27.814 09:24:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:27.814 09:24:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:14:27.814 09:24:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:29.716 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:29.716 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:29.716 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:29.716 09:24:13 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:29.716 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:29.716 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:29.717 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:29.717 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:29.717 09:24:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:29.717 09:24:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:29.717 09:24:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:29.717 09:24:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:29.717 09:24:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:29.717 09:24:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:29.717 09:24:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:29.717 09:24:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:29.717 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:29.717 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:14:29.717 00:14:29.717 --- 10.0.0.2 ping statistics --- 00:14:29.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.717 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:14:29.717 09:24:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:29.717 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:29.717 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:14:29.717 00:14:29.717 --- 10.0.0.1 ping statistics --- 00:14:29.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.717 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:14:29.717 09:24:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:29.717 09:24:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:14:29.717 09:24:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:29.717 09:24:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:29.717 09:24:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:29.717 09:24:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:29.717 09:24:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:29.717 09:24:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:29.717 09:24:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:29.717 09:24:14 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:29.717 09:24:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:29.717 09:24:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:29.717 09:24:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:29.717 09:24:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=691053 00:14:29.717 09:24:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:29.717 09:24:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 691053 00:14:29.717 09:24:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 691053 ']' 00:14:29.717 09:24:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.717 09:24:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:29.717 09:24:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:29.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:29.717 09:24:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:29.717 09:24:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:29.975 [2024-07-14 09:24:14.191319] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
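Stripped of the xtrace noise, the network bring-up interleaved above (nvmf/common.sh, physical E810 ports cvl_0_0 and cvl_0_1) reduces to the sequence below; every command is taken verbatim from the trace, only the comments are editorial:

    ip netns add cvl_0_0_ns_spdk                          # the target gets its own network namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # first port becomes the target-side interface
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # initiator -> target reachability check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator reachability check

nvmf_tgt is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2, pid 691053 here), and the script blocks in waitforlisten until the application's RPC socket is up.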
00:14:29.975 [2024-07-14 09:24:14.191389] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:29.975 EAL: No free 2048 kB hugepages reported on node 1 00:14:29.975 [2024-07-14 09:24:14.262541] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.975 [2024-07-14 09:24:14.351875] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:29.975 [2024-07-14 09:24:14.351941] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:29.975 [2024-07-14 09:24:14.351967] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:29.975 [2024-07-14 09:24:14.351981] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:29.975 [2024-07-14 09:24:14.351993] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:29.975 [2024-07-14 09:24:14.352021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:30.233 09:24:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:30.233 09:24:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:14:30.233 09:24:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:30.233 09:24:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:30.233 09:24:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:30.233 09:24:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:30.233 09:24:14 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:30.233 09:24:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.233 09:24:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:30.233 [2024-07-14 09:24:14.500783] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:30.233 09:24:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.233 09:24:14 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:30.233 09:24:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.233 09:24:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:30.233 09:24:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.233 09:24:14 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:30.233 09:24:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.233 09:24:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:30.233 [2024-07-14 09:24:14.517001] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:30.233 09:24:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.233 09:24:14 
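Together with the bdev_null_create and nvmf_subsystem_add_ns calls that follow immediately below, fused_ordering.sh builds its target with six rpc_cmd invocations; they are collected here as one block (commands and arguments are copied from the trace, comments are editorial):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512               # 1000 MiB null bdev, 512-byte blocks ("size: 1GB" below)
    rpc_cmd bdev_wait_for_examine
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The fused_ordering helper binary is then pointed at that subsystem (-r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'), and the numbered fused_ordering(N) lines that follow are its progress output.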
nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:30.233 09:24:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.233 09:24:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:30.233 NULL1 00:14:30.233 09:24:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.233 09:24:14 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:30.233 09:24:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.233 09:24:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:30.233 09:24:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.233 09:24:14 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:30.233 09:24:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.233 09:24:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:30.233 09:24:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.233 09:24:14 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:30.233 [2024-07-14 09:24:14.561526] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:14:30.233 [2024-07-14 09:24:14.561574] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid691164 ] 00:14:30.233 EAL: No free 2048 kB hugepages reported on node 1 00:14:31.165 Attached to nqn.2016-06.io.spdk:cnode1 00:14:31.165 Namespace ID: 1 size: 1GB 00:14:31.165 fused_ordering(0) 00:14:31.165 fused_ordering(1) 00:14:31.165 fused_ordering(2) 00:14:31.165 fused_ordering(3) 00:14:31.165 fused_ordering(4) 00:14:31.165 fused_ordering(5) 00:14:31.165 fused_ordering(6) 00:14:31.165 fused_ordering(7) 00:14:31.165 fused_ordering(8) 00:14:31.165 fused_ordering(9) 00:14:31.165 fused_ordering(10) 00:14:31.165 fused_ordering(11) 00:14:31.165 fused_ordering(12) 00:14:31.165 fused_ordering(13) 00:14:31.165 fused_ordering(14) 00:14:31.165 fused_ordering(15) 00:14:31.165 fused_ordering(16) 00:14:31.165 fused_ordering(17) 00:14:31.165 fused_ordering(18) 00:14:31.165 fused_ordering(19) 00:14:31.165 fused_ordering(20) 00:14:31.165 fused_ordering(21) 00:14:31.165 fused_ordering(22) 00:14:31.165 fused_ordering(23) 00:14:31.165 fused_ordering(24) 00:14:31.165 fused_ordering(25) 00:14:31.165 fused_ordering(26) 00:14:31.165 fused_ordering(27) 00:14:31.165 fused_ordering(28) 00:14:31.165 fused_ordering(29) 00:14:31.165 fused_ordering(30) 00:14:31.165 fused_ordering(31) 00:14:31.165 fused_ordering(32) 00:14:31.165 fused_ordering(33) 00:14:31.165 fused_ordering(34) 00:14:31.165 fused_ordering(35) 00:14:31.165 fused_ordering(36) 00:14:31.165 fused_ordering(37) 00:14:31.165 fused_ordering(38) 00:14:31.165 fused_ordering(39) 00:14:31.165 fused_ordering(40) 00:14:31.165 fused_ordering(41) 00:14:31.165 fused_ordering(42) 00:14:31.165 fused_ordering(43) 00:14:31.165 
fused_ordering(44) 00:14:31.165 fused_ordering(45) 00:14:31.165 fused_ordering(46) 00:14:31.165 fused_ordering(47) 00:14:31.165 fused_ordering(48) 00:14:31.165 fused_ordering(49) 00:14:31.165 fused_ordering(50) 00:14:31.165 fused_ordering(51) 00:14:31.165 fused_ordering(52) 00:14:31.165 fused_ordering(53) 00:14:31.165 fused_ordering(54) 00:14:31.165 fused_ordering(55) 00:14:31.165 fused_ordering(56) 00:14:31.165 fused_ordering(57) 00:14:31.165 fused_ordering(58) 00:14:31.165 fused_ordering(59) 00:14:31.165 fused_ordering(60) 00:14:31.165 fused_ordering(61) 00:14:31.165 fused_ordering(62) 00:14:31.165 fused_ordering(63) 00:14:31.165 fused_ordering(64) 00:14:31.165 fused_ordering(65) 00:14:31.165 fused_ordering(66) 00:14:31.165 fused_ordering(67) 00:14:31.165 fused_ordering(68) 00:14:31.165 fused_ordering(69) 00:14:31.165 fused_ordering(70) 00:14:31.165 fused_ordering(71) 00:14:31.165 fused_ordering(72) 00:14:31.165 fused_ordering(73) 00:14:31.165 fused_ordering(74) 00:14:31.165 fused_ordering(75) 00:14:31.165 fused_ordering(76) 00:14:31.165 fused_ordering(77) 00:14:31.165 fused_ordering(78) 00:14:31.165 fused_ordering(79) 00:14:31.165 fused_ordering(80) 00:14:31.165 fused_ordering(81) 00:14:31.165 fused_ordering(82) 00:14:31.165 fused_ordering(83) 00:14:31.165 fused_ordering(84) 00:14:31.165 fused_ordering(85) 00:14:31.165 fused_ordering(86) 00:14:31.165 fused_ordering(87) 00:14:31.165 fused_ordering(88) 00:14:31.165 fused_ordering(89) 00:14:31.165 fused_ordering(90) 00:14:31.165 fused_ordering(91) 00:14:31.165 fused_ordering(92) 00:14:31.165 fused_ordering(93) 00:14:31.165 fused_ordering(94) 00:14:31.165 fused_ordering(95) 00:14:31.165 fused_ordering(96) 00:14:31.165 fused_ordering(97) 00:14:31.165 fused_ordering(98) 00:14:31.165 fused_ordering(99) 00:14:31.165 fused_ordering(100) 00:14:31.165 fused_ordering(101) 00:14:31.165 fused_ordering(102) 00:14:31.165 fused_ordering(103) 00:14:31.165 fused_ordering(104) 00:14:31.165 fused_ordering(105) 00:14:31.165 fused_ordering(106) 00:14:31.165 fused_ordering(107) 00:14:31.165 fused_ordering(108) 00:14:31.165 fused_ordering(109) 00:14:31.165 fused_ordering(110) 00:14:31.165 fused_ordering(111) 00:14:31.165 fused_ordering(112) 00:14:31.165 fused_ordering(113) 00:14:31.165 fused_ordering(114) 00:14:31.165 fused_ordering(115) 00:14:31.165 fused_ordering(116) 00:14:31.165 fused_ordering(117) 00:14:31.165 fused_ordering(118) 00:14:31.165 fused_ordering(119) 00:14:31.165 fused_ordering(120) 00:14:31.165 fused_ordering(121) 00:14:31.165 fused_ordering(122) 00:14:31.165 fused_ordering(123) 00:14:31.165 fused_ordering(124) 00:14:31.165 fused_ordering(125) 00:14:31.165 fused_ordering(126) 00:14:31.165 fused_ordering(127) 00:14:31.165 fused_ordering(128) 00:14:31.165 fused_ordering(129) 00:14:31.165 fused_ordering(130) 00:14:31.165 fused_ordering(131) 00:14:31.165 fused_ordering(132) 00:14:31.165 fused_ordering(133) 00:14:31.165 fused_ordering(134) 00:14:31.165 fused_ordering(135) 00:14:31.165 fused_ordering(136) 00:14:31.165 fused_ordering(137) 00:14:31.165 fused_ordering(138) 00:14:31.165 fused_ordering(139) 00:14:31.165 fused_ordering(140) 00:14:31.165 fused_ordering(141) 00:14:31.165 fused_ordering(142) 00:14:31.165 fused_ordering(143) 00:14:31.165 fused_ordering(144) 00:14:31.165 fused_ordering(145) 00:14:31.165 fused_ordering(146) 00:14:31.165 fused_ordering(147) 00:14:31.165 fused_ordering(148) 00:14:31.165 fused_ordering(149) 00:14:31.165 fused_ordering(150) 00:14:31.165 fused_ordering(151) 00:14:31.165 fused_ordering(152) 00:14:31.165 
fused_ordering(153) 00:14:31.165 fused_ordering(154) 00:14:31.165 fused_ordering(155) 00:14:31.165 fused_ordering(156) 00:14:31.165 fused_ordering(157) 00:14:31.165 fused_ordering(158) 00:14:31.165 fused_ordering(159) 00:14:31.165 fused_ordering(160) 00:14:31.165 fused_ordering(161) 00:14:31.165 fused_ordering(162) 00:14:31.165 fused_ordering(163) 00:14:31.165 fused_ordering(164) 00:14:31.165 fused_ordering(165) 00:14:31.165 fused_ordering(166) 00:14:31.165 fused_ordering(167) 00:14:31.165 fused_ordering(168) 00:14:31.165 fused_ordering(169) 00:14:31.165 fused_ordering(170) 00:14:31.165 fused_ordering(171) 00:14:31.165 fused_ordering(172) 00:14:31.165 fused_ordering(173) 00:14:31.165 fused_ordering(174) 00:14:31.165 fused_ordering(175) 00:14:31.165 fused_ordering(176) 00:14:31.165 fused_ordering(177) 00:14:31.165 fused_ordering(178) 00:14:31.165 fused_ordering(179) 00:14:31.165 fused_ordering(180) 00:14:31.165 fused_ordering(181) 00:14:31.165 fused_ordering(182) 00:14:31.165 fused_ordering(183) 00:14:31.165 fused_ordering(184) 00:14:31.165 fused_ordering(185) 00:14:31.165 fused_ordering(186) 00:14:31.165 fused_ordering(187) 00:14:31.165 fused_ordering(188) 00:14:31.165 fused_ordering(189) 00:14:31.165 fused_ordering(190) 00:14:31.165 fused_ordering(191) 00:14:31.165 fused_ordering(192) 00:14:31.165 fused_ordering(193) 00:14:31.165 fused_ordering(194) 00:14:31.165 fused_ordering(195) 00:14:31.165 fused_ordering(196) 00:14:31.165 fused_ordering(197) 00:14:31.165 fused_ordering(198) 00:14:31.165 fused_ordering(199) 00:14:31.165 fused_ordering(200) 00:14:31.165 fused_ordering(201) 00:14:31.165 fused_ordering(202) 00:14:31.165 fused_ordering(203) 00:14:31.165 fused_ordering(204) 00:14:31.165 fused_ordering(205) 00:14:31.731 fused_ordering(206) 00:14:31.731 fused_ordering(207) 00:14:31.731 fused_ordering(208) 00:14:31.731 fused_ordering(209) 00:14:31.731 fused_ordering(210) 00:14:31.731 fused_ordering(211) 00:14:31.731 fused_ordering(212) 00:14:31.731 fused_ordering(213) 00:14:31.731 fused_ordering(214) 00:14:31.731 fused_ordering(215) 00:14:31.731 fused_ordering(216) 00:14:31.731 fused_ordering(217) 00:14:31.731 fused_ordering(218) 00:14:31.731 fused_ordering(219) 00:14:31.731 fused_ordering(220) 00:14:31.731 fused_ordering(221) 00:14:31.731 fused_ordering(222) 00:14:31.731 fused_ordering(223) 00:14:31.731 fused_ordering(224) 00:14:31.731 fused_ordering(225) 00:14:31.731 fused_ordering(226) 00:14:31.731 fused_ordering(227) 00:14:31.731 fused_ordering(228) 00:14:31.731 fused_ordering(229) 00:14:31.731 fused_ordering(230) 00:14:31.731 fused_ordering(231) 00:14:31.731 fused_ordering(232) 00:14:31.731 fused_ordering(233) 00:14:31.731 fused_ordering(234) 00:14:31.731 fused_ordering(235) 00:14:31.731 fused_ordering(236) 00:14:31.731 fused_ordering(237) 00:14:31.731 fused_ordering(238) 00:14:31.731 fused_ordering(239) 00:14:31.731 fused_ordering(240) 00:14:31.731 fused_ordering(241) 00:14:31.731 fused_ordering(242) 00:14:31.731 fused_ordering(243) 00:14:31.731 fused_ordering(244) 00:14:31.731 fused_ordering(245) 00:14:31.731 fused_ordering(246) 00:14:31.731 fused_ordering(247) 00:14:31.731 fused_ordering(248) 00:14:31.731 fused_ordering(249) 00:14:31.731 fused_ordering(250) 00:14:31.731 fused_ordering(251) 00:14:31.731 fused_ordering(252) 00:14:31.731 fused_ordering(253) 00:14:31.731 fused_ordering(254) 00:14:31.731 fused_ordering(255) 00:14:31.731 fused_ordering(256) 00:14:31.731 fused_ordering(257) 00:14:31.731 fused_ordering(258) 00:14:31.731 fused_ordering(259) 00:14:31.731 fused_ordering(260) 
00:14:31.731 fused_ordering(261) 00:14:31.731 fused_ordering(262) 00:14:31.731 fused_ordering(263) 00:14:31.731 fused_ordering(264) 00:14:31.731 fused_ordering(265) 00:14:31.731 fused_ordering(266) 00:14:31.731 fused_ordering(267) 00:14:31.731 fused_ordering(268) 00:14:31.731 fused_ordering(269) 00:14:31.731 fused_ordering(270) 00:14:31.731 fused_ordering(271) 00:14:31.731 fused_ordering(272) 00:14:31.731 fused_ordering(273) 00:14:31.731 fused_ordering(274) 00:14:31.731 fused_ordering(275) 00:14:31.731 fused_ordering(276) 00:14:31.731 fused_ordering(277) 00:14:31.731 fused_ordering(278) 00:14:31.731 fused_ordering(279) 00:14:31.731 fused_ordering(280) 00:14:31.731 fused_ordering(281) 00:14:31.731 fused_ordering(282) 00:14:31.731 fused_ordering(283) 00:14:31.731 fused_ordering(284) 00:14:31.731 fused_ordering(285) 00:14:31.731 fused_ordering(286) 00:14:31.731 fused_ordering(287) 00:14:31.731 fused_ordering(288) 00:14:31.731 fused_ordering(289) 00:14:31.731 fused_ordering(290) 00:14:31.731 fused_ordering(291) 00:14:31.731 fused_ordering(292) 00:14:31.731 fused_ordering(293) 00:14:31.731 fused_ordering(294) 00:14:31.731 fused_ordering(295) 00:14:31.731 fused_ordering(296) 00:14:31.731 fused_ordering(297) 00:14:31.731 fused_ordering(298) 00:14:31.731 fused_ordering(299) 00:14:31.731 fused_ordering(300) 00:14:31.731 fused_ordering(301) 00:14:31.731 fused_ordering(302) 00:14:31.731 fused_ordering(303) 00:14:31.731 fused_ordering(304) 00:14:31.731 fused_ordering(305) 00:14:31.731 fused_ordering(306) 00:14:31.731 fused_ordering(307) 00:14:31.731 fused_ordering(308) 00:14:31.731 fused_ordering(309) 00:14:31.731 fused_ordering(310) 00:14:31.731 fused_ordering(311) 00:14:31.731 fused_ordering(312) 00:14:31.731 fused_ordering(313) 00:14:31.731 fused_ordering(314) 00:14:31.731 fused_ordering(315) 00:14:31.731 fused_ordering(316) 00:14:31.731 fused_ordering(317) 00:14:31.731 fused_ordering(318) 00:14:31.731 fused_ordering(319) 00:14:31.731 fused_ordering(320) 00:14:31.731 fused_ordering(321) 00:14:31.731 fused_ordering(322) 00:14:31.731 fused_ordering(323) 00:14:31.731 fused_ordering(324) 00:14:31.731 fused_ordering(325) 00:14:31.731 fused_ordering(326) 00:14:31.731 fused_ordering(327) 00:14:31.731 fused_ordering(328) 00:14:31.731 fused_ordering(329) 00:14:31.731 fused_ordering(330) 00:14:31.731 fused_ordering(331) 00:14:31.731 fused_ordering(332) 00:14:31.731 fused_ordering(333) 00:14:31.731 fused_ordering(334) 00:14:31.731 fused_ordering(335) 00:14:31.731 fused_ordering(336) 00:14:31.731 fused_ordering(337) 00:14:31.731 fused_ordering(338) 00:14:31.731 fused_ordering(339) 00:14:31.731 fused_ordering(340) 00:14:31.731 fused_ordering(341) 00:14:31.731 fused_ordering(342) 00:14:31.731 fused_ordering(343) 00:14:31.731 fused_ordering(344) 00:14:31.731 fused_ordering(345) 00:14:31.731 fused_ordering(346) 00:14:31.731 fused_ordering(347) 00:14:31.731 fused_ordering(348) 00:14:31.731 fused_ordering(349) 00:14:31.731 fused_ordering(350) 00:14:31.731 fused_ordering(351) 00:14:31.731 fused_ordering(352) 00:14:31.731 fused_ordering(353) 00:14:31.731 fused_ordering(354) 00:14:31.731 fused_ordering(355) 00:14:31.731 fused_ordering(356) 00:14:31.731 fused_ordering(357) 00:14:31.731 fused_ordering(358) 00:14:31.731 fused_ordering(359) 00:14:31.731 fused_ordering(360) 00:14:31.731 fused_ordering(361) 00:14:31.731 fused_ordering(362) 00:14:31.731 fused_ordering(363) 00:14:31.731 fused_ordering(364) 00:14:31.731 fused_ordering(365) 00:14:31.731 fused_ordering(366) 00:14:31.731 fused_ordering(367) 00:14:31.731 
fused_ordering(368) ... fused_ordering(1012) [repeated single-entry fused_ordering completions 368 through 1012, timestamps 00:14:31.731 to 00:14:34.166, omitted]
00:14:34.166 fused_ordering(1013) 00:14:34.166 fused_ordering(1014) 00:14:34.166 fused_ordering(1015) 00:14:34.166 fused_ordering(1016) 00:14:34.166 fused_ordering(1017) 00:14:34.166 fused_ordering(1018) 00:14:34.166 fused_ordering(1019) 00:14:34.166 fused_ordering(1020) 00:14:34.166 fused_ordering(1021) 00:14:34.166 fused_ordering(1022) 00:14:34.166 fused_ordering(1023) 00:14:34.166 09:24:18 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:34.166 09:24:18 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:34.166 09:24:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:34.166 09:24:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:14:34.166 09:24:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:34.166 09:24:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:14:34.167 09:24:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:34.167 09:24:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:34.167 rmmod nvme_tcp 00:14:34.167 rmmod nvme_fabrics 00:14:34.425 rmmod nvme_keyring 00:14:34.425 09:24:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:34.425 09:24:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:14:34.425 09:24:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:14:34.425 09:24:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 691053 ']' 00:14:34.425 09:24:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 691053 00:14:34.425 09:24:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 691053 ']' 00:14:34.425 09:24:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 691053 00:14:34.425 09:24:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:14:34.425 09:24:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:34.425 09:24:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 691053 00:14:34.425 09:24:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:34.425 09:24:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:34.425 09:24:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 691053' 00:14:34.425 killing process with pid 691053 00:14:34.425 09:24:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 691053 00:14:34.425 09:24:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 691053 00:14:34.684 09:24:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:34.684 09:24:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:34.684 09:24:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:34.684 09:24:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:34.684 09:24:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:34.684 09:24:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:34.684 09:24:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 14> /dev/null' 00:14:34.684 09:24:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:36.603 09:24:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:36.603 00:14:36.603 real 0m8.997s 00:14:36.603 user 0m6.559s 00:14:36.603 sys 0m4.682s 00:14:36.603 09:24:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:36.604 09:24:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:36.604 ************************************ 00:14:36.604 END TEST nvmf_fused_ordering 00:14:36.604 ************************************ 00:14:36.604 09:24:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:36.604 09:24:20 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:36.604 09:24:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:36.604 09:24:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:36.604 09:24:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:36.604 ************************************ 00:14:36.604 START TEST nvmf_delete_subsystem 00:14:36.604 ************************************ 00:14:36.604 09:24:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:36.604 * Looking for test storage... 00:14:36.604 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:36.604 09:24:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:36.604 09:24:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:14:36.604 09:24:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:36.604 09:24:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:36.604 09:24:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:36.604 09:24:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:36.604 09:24:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:36.604 09:24:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:36.604 09:24:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:36.604 09:24:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:36.604 09:24:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:36.604 09:24:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:36.604 09:24:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:36.604 09:24:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:36.604 09:24:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:36.604 09:24:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:36.604 09:24:21 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:36.604 09:24:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:36.604 09:24:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:36.604 09:24:21 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:36.604 09:24:21 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:36.604 09:24:21 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:36.604 09:24:21 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.604 09:24:21 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.605 09:24:21 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.605 09:24:21 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:14:36.605 09:24:21 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.605 09:24:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:14:36.605 09:24:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:36.605 09:24:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:36.605 09:24:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # 
'[' 0 -eq 1 ']' 00:14:36.605 09:24:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:36.605 09:24:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:36.605 09:24:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:36.605 09:24:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:36.605 09:24:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:36.605 09:24:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:14:36.605 09:24:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:36.605 09:24:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:36.605 09:24:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:36.883 09:24:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:36.883 09:24:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:36.883 09:24:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:36.883 09:24:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:36.883 09:24:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:36.883 09:24:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:36.883 09:24:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:36.883 09:24:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:14:36.883 09:24:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:38.784 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:38.784 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 
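For reference, the PCI discovery being traced here can be reproduced outside the harness with a short shell loop. The following is only a minimal sketch, not the gather_supported_nvmf_pci_devs helper itself; it assumes the Intel E810 vendor/device pair 0x8086/0x159b reported in this run and walks the same sysfs paths the trace uses to map each matching PCI function to its network interfaces.

for pci in /sys/bus/pci/devices/*; do
    # keep only PCI functions whose vendor/device IDs match the E810 NICs found above
    [ "$(cat "$pci/vendor")" = "0x8086" ] || continue
    [ "$(cat "$pci/device")" = "0x159b" ] || continue
    for net in "$pci"/net/*; do
        [ -e "$net" ] || continue   # skip functions with no registered net device
        echo "Found net devices under ${pci##*/}: ${net##*/}"
    done
done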
00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:38.784 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:38.784 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:38.784 09:24:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:38.784 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:38.784 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:38.784 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:38.784 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:38.784 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:38.784 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:38.784 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:38.784 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:38.784 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:14:38.784 00:14:38.784 --- 10.0.0.2 ping statistics --- 00:14:38.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.784 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:14:38.784 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:38.784 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:38.784 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:14:38.784 00:14:38.784 --- 10.0.0.1 ping statistics --- 00:14:38.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.784 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:14:38.784 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:38.784 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:14:38.784 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:38.784 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:38.784 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:38.784 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:38.784 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:38.784 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:38.784 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:38.784 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:14:38.784 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:38.784 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:38.784 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:38.784 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=693498 00:14:38.784 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:38.784 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 693498 00:14:38.784 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 693498 ']' 00:14:38.784 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.784 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:38.784 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:38.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:38.784 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:38.784 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:38.784 [2024-07-14 09:24:23.181597] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:14:38.784 [2024-07-14 09:24:23.181684] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:38.785 EAL: No free 2048 kB hugepages reported on node 1 00:14:39.042 [2024-07-14 09:24:23.248222] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:39.042 [2024-07-14 09:24:23.334039] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:39.042 [2024-07-14 09:24:23.334109] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:39.042 [2024-07-14 09:24:23.334122] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:39.042 [2024-07-14 09:24:23.334134] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:39.042 [2024-07-14 09:24:23.334144] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:39.042 [2024-07-14 09:24:23.334205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:39.042 [2024-07-14 09:24:23.334210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:39.042 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:39.042 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:14:39.042 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:39.042 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:39.042 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:39.042 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:39.042 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:39.042 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.042 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:39.042 [2024-07-14 09:24:23.480759] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:39.042 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.042 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:39.042 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.042 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:39.042 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.042 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:39.042 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.042 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:39.299 [2024-07-14 09:24:23.497089] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:39.299 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.299 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:39.299 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.299 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:39.299 NULL1 00:14:39.299 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:14:39.299 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:39.299 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.299 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:39.299 Delay0 00:14:39.299 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.299 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:39.299 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.299 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:39.299 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.299 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=693640 00:14:39.299 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:14:39.299 09:24:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:39.299 EAL: No free 2048 kB hugepages reported on node 1 00:14:39.299 [2024-07-14 09:24:23.571632] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
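Taken together, the rpc_cmd calls traced above build the target that spdk_nvme_perf then exercises, and the subsystem deletion that follows is issued while that I/O is still in flight. A rough standalone equivalent, shown only as a sketch under the assumption that the target listens on the default /var/tmp/spdk.sock RPC socket and that scripts/rpc.py stands in for the harness's rpc_cmd wrapper, would be:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

$RPC nvmf_create_transport -t tcp -o -u 8192               # TCP transport with the options traced above
$RPC nvmf_create_subsystem $NQN -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512                        # 1000 MB null bdev, 512-byte blocks
$RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1 s latencies keep I/O pending
$RPC nvmf_subsystem_add_ns $NQN Delay0

# start queued I/O against the listener, then delete the subsystem underneath it
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
sleep 2
$RPC nvmf_delete_subsystem $NQN                             # outstanding commands complete with errors, as logged below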
00:14:41.217 09:24:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:41.217 09:24:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.217 09:24:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:41.499 Read completed with error (sct=0, sc=8) 00:14:41.499 Read completed with error (sct=0, sc=8) 00:14:41.499 Write completed with error (sct=0, sc=8) 00:14:41.499 Read completed with error (sct=0, sc=8) 00:14:41.499 starting I/O failed: -6 00:14:41.499 Read completed with error (sct=0, sc=8) 00:14:41.499 Read completed with error (sct=0, sc=8) 00:14:41.499 Read completed with error (sct=0, sc=8) 00:14:41.499 Write completed with error (sct=0, sc=8) 00:14:41.499 starting I/O failed: -6 00:14:41.499 Write completed with error (sct=0, sc=8) 00:14:41.499 Read completed with error (sct=0, sc=8) 00:14:41.499 Read completed with error (sct=0, sc=8) 00:14:41.499 Read completed with error (sct=0, sc=8) 00:14:41.499 starting I/O failed: -6 00:14:41.499 Read completed with error (sct=0, sc=8) 00:14:41.499 Read completed with error (sct=0, sc=8) 00:14:41.500 Write completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 starting I/O failed: -6 00:14:41.500 Write completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 starting I/O failed: -6 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Write completed with error (sct=0, sc=8) 00:14:41.500 starting I/O failed: -6 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Write completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 starting I/O failed: -6 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Write completed with error (sct=0, sc=8) 00:14:41.500 starting I/O failed: -6 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Write completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 starting I/O failed: -6 00:14:41.500 Write completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Write completed with error (sct=0, sc=8) 00:14:41.500 starting I/O failed: -6 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Write completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 starting I/O failed: -6 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 starting I/O failed: -6 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 starting I/O failed: -6 
00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 [2024-07-14 09:24:25.703918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb2b800d2f0 is same with the state(5) to be set 00:14:41.500 Write completed with error (sct=0, sc=8) 00:14:41.500 starting I/O failed: -6 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Write completed with error (sct=0, sc=8) 00:14:41.500 Write completed with error (sct=0, sc=8) 00:14:41.500 Write completed with error (sct=0, sc=8) 00:14:41.500 starting I/O failed: -6 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Write completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Write completed with error (sct=0, sc=8) 00:14:41.500 Write completed with error (sct=0, sc=8) 00:14:41.500 Write completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 starting I/O failed: -6 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Write completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 starting I/O failed: -6 00:14:41.500 Write completed with error (sct=0, sc=8) 00:14:41.500 Write completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Write completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 starting I/O failed: -6 00:14:41.500 Write completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Write completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Write completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 starting I/O failed: -6 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Write completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 
Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Write completed with error (sct=0, sc=8) 00:14:41.500 starting I/O failed: -6 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Write completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 starting I/O failed: -6 00:14:41.500 Write completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Write completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Write completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Write completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Write completed with error (sct=0, sc=8) 00:14:41.500 starting I/O failed: -6 00:14:41.500 Write completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Write completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Write completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Write completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 starting I/O failed: -6 00:14:41.500 Write completed with error (sct=0, sc=8) 00:14:41.500 Write completed with error (sct=0, sc=8) 00:14:41.500 starting I/O failed: -6 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 starting I/O failed: -6 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Write completed with error (sct=0, sc=8) 00:14:41.500 starting I/O failed: -6 00:14:41.500 Write completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 starting I/O failed: -6 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 starting I/O failed: -6 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 starting I/O failed: -6 00:14:41.500 Write completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 starting I/O failed: -6 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Write completed with error (sct=0, sc=8) 00:14:41.500 starting I/O failed: -6 00:14:41.500 Write completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 starting I/O failed: -6 00:14:41.500 Write completed with error (sct=0, 
sc=8) 00:14:41.500 Write completed with error (sct=0, sc=8) 00:14:41.500 starting I/O failed: -6 00:14:41.500 Write completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 starting I/O failed: -6 00:14:41.500 Write completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 starting I/O failed: -6 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 starting I/O failed: -6 00:14:41.500 Write completed with error (sct=0, sc=8) 00:14:41.500 Write completed with error (sct=0, sc=8) 00:14:41.500 starting I/O failed: -6 00:14:41.500 Write completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 starting I/O failed: -6 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 starting I/O failed: -6 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 starting I/O failed: -6 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 starting I/O failed: -6 00:14:41.500 Write completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 starting I/O failed: -6 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 starting I/O failed: -6 00:14:41.500 Write completed with error (sct=0, sc=8) 00:14:41.500 Write completed with error (sct=0, sc=8) 00:14:41.500 starting I/O failed: -6 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 Read completed with error (sct=0, sc=8) 00:14:41.500 starting I/O failed: -6 00:14:41.501 Read completed with error (sct=0, sc=8) 00:14:41.501 Read completed with error (sct=0, sc=8) 00:14:41.501 starting I/O failed: -6 00:14:41.501 Read completed with error (sct=0, sc=8) 00:14:41.501 Read completed with error (sct=0, sc=8) 00:14:41.501 starting I/O failed: -6 00:14:41.501 Write completed with error (sct=0, sc=8) 00:14:41.501 Read completed with error (sct=0, sc=8) 00:14:41.501 starting I/O failed: -6 00:14:41.501 Read completed with error (sct=0, sc=8) 00:14:41.501 Read completed with error (sct=0, sc=8) 00:14:41.501 starting I/O failed: -6 00:14:41.501 Write completed with error (sct=0, sc=8) 00:14:41.501 Read completed with error (sct=0, sc=8) 00:14:41.501 starting I/O failed: -6 00:14:41.501 Write completed with error (sct=0, sc=8) 00:14:41.501 Read completed with error (sct=0, sc=8) 00:14:41.501 starting I/O failed: -6 00:14:41.501 Read completed with error (sct=0, sc=8) 00:14:41.501 Write completed with error (sct=0, sc=8) 00:14:41.501 starting I/O failed: -6 00:14:41.501 starting I/O failed: -6 00:14:41.501 starting I/O failed: -6 00:14:41.501 starting I/O failed: -6 00:14:41.501 starting I/O failed: -6 00:14:42.441 [2024-07-14 09:24:26.676335] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1094a30 is same with the state(5) to be set 00:14:42.441 Read completed with error (sct=0, sc=8) 00:14:42.441 Write completed with error (sct=0, sc=8) 00:14:42.441 Read completed with error (sct=0, sc=8) 00:14:42.441 Read completed with error (sct=0, sc=8) 00:14:42.441 Write completed with error (sct=0, sc=8) 00:14:42.441 Read completed with error (sct=0, sc=8) 00:14:42.441 Read completed with error (sct=0, sc=8) 00:14:42.441 Read completed with error 
(sct=0, sc=8) 00:14:42.441 Read completed with error (sct=0, sc=8) 00:14:42.441 Read completed with error (sct=0, sc=8) 00:14:42.441 Read completed with error (sct=0, sc=8) 00:14:42.441 Write completed with error (sct=0, sc=8) 00:14:42.441 Write completed with error (sct=0, sc=8) 00:14:42.441 Read completed with error (sct=0, sc=8) 00:14:42.441 Write completed with error (sct=0, sc=8) 00:14:42.441 Write completed with error (sct=0, sc=8) 00:14:42.441 Read completed with error (sct=0, sc=8) 00:14:42.441 Write completed with error (sct=0, sc=8) 00:14:42.441 Write completed with error (sct=0, sc=8) 00:14:42.441 Read completed with error (sct=0, sc=8) 00:14:42.441 Read completed with error (sct=0, sc=8) 00:14:42.441 Read completed with error (sct=0, sc=8) 00:14:42.441 Read completed with error (sct=0, sc=8) 00:14:42.441 Read completed with error (sct=0, sc=8) 00:14:42.441 Write completed with error (sct=0, sc=8) 00:14:42.441 [2024-07-14 09:24:26.705967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb2b800d600 is same with the state(5) to be set 00:14:42.441 Write completed with error (sct=0, sc=8) 00:14:42.441 Read completed with error (sct=0, sc=8) 00:14:42.441 Read completed with error (sct=0, sc=8) 00:14:42.441 Write completed with error (sct=0, sc=8) 00:14:42.441 Write completed with error (sct=0, sc=8) 00:14:42.441 Write completed with error (sct=0, sc=8) 00:14:42.441 Read completed with error (sct=0, sc=8) 00:14:42.441 Read completed with error (sct=0, sc=8) 00:14:42.441 Write completed with error (sct=0, sc=8) 00:14:42.441 Read completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 Write completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 Write completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 [2024-07-14 09:24:26.706163] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb2b800cfe0 is same with the state(5) to be set 00:14:42.442 Write completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 Write completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 Write completed with error (sct=0, sc=8) 00:14:42.442 Write completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, 
sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 Write completed with error (sct=0, sc=8) 00:14:42.442 Write completed with error (sct=0, sc=8) 00:14:42.442 Write completed with error (sct=0, sc=8) 00:14:42.442 Write completed with error (sct=0, sc=8) 00:14:42.442 Write completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 Write completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 Write completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 Write completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 Write completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 [2024-07-14 09:24:26.706436] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1087450 is same with the state(5) to be set 00:14:42.442 Write completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 Write completed with error (sct=0, sc=8) 00:14:42.442 Write completed with error (sct=0, sc=8) 00:14:42.442 Write completed with error (sct=0, sc=8) 00:14:42.442 Write completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 Write completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 Write completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 Write completed with error (sct=0, sc=8) 00:14:42.442 Write completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 Write completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 Write completed with error (sct=0, sc=8) 00:14:42.442 Write completed with error (sct=0, sc=8) 00:14:42.442 Write completed with error (sct=0, sc=8) 00:14:42.442 Write completed with error (sct=0, sc=8) 00:14:42.442 Write completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 Write completed with error (sct=0, sc=8) 00:14:42.442 
Read completed with error (sct=0, sc=8) 00:14:42.442 Read completed with error (sct=0, sc=8) 00:14:42.442 [2024-07-14 09:24:26.706996] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1086e30 is same with the state(5) to be set 00:14:42.442 Initializing NVMe Controllers 00:14:42.442 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:42.442 Controller IO queue size 128, less than required. 00:14:42.442 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:42.442 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:42.442 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:42.442 Initialization complete. Launching workers. 00:14:42.442 ======================================================== 00:14:42.442 Latency(us) 00:14:42.442 Device Information : IOPS MiB/s Average min max 00:14:42.442 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 187.52 0.09 902132.79 708.23 1014135.11 00:14:42.442 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 171.65 0.08 890008.52 563.40 1012922.34 00:14:42.442 ======================================================== 00:14:42.442 Total : 359.17 0.18 896338.59 563.40 1014135.11 00:14:42.442 00:14:42.442 [2024-07-14 09:24:26.708026] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1094a30 (9): Bad file descriptor 00:14:42.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:14:42.442 09:24:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.442 09:24:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:14:42.442 09:24:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 693640 00:14:42.442 09:24:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:14:43.007 09:24:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:14:43.007 09:24:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 693640 00:14:43.007 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (693640) - No such process 00:14:43.007 09:24:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 693640 00:14:43.007 09:24:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:14:43.007 09:24:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 693640 00:14:43.007 09:24:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:14:43.007 09:24:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:43.007 09:24:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:14:43.007 09:24:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:43.007 09:24:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 693640 00:14:43.007 09:24:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:14:43.007 09:24:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:43.007 09:24:27 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:43.007 09:24:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:43.007 09:24:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:43.007 09:24:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.007 09:24:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:43.007 09:24:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.007 09:24:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:43.007 09:24:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.007 09:24:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:43.008 [2024-07-14 09:24:27.226846] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:43.008 09:24:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.008 09:24:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:43.008 09:24:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.008 09:24:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:43.008 09:24:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.008 09:24:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=694042 00:14:43.008 09:24:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:43.008 09:24:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:14:43.008 09:24:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 694042 00:14:43.008 09:24:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:43.008 EAL: No free 2048 kB hugepages reported on node 1 00:14:43.008 [2024-07-14 09:24:27.283648] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
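A minimal sketch of the pattern traced above (delete_subsystem.sh around lines 48-60), assuming the rpc.py and spdk_nvme_perf paths, NQN and options used in this run; the rpc_cmd helper from the harness is stood in for by a plain $rpc variable, and the loop bound mirrors the `delay++ > 20` guard visible in the trace:
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
# re-create the subsystem (max 10 namespaces this time), expose it and attach the Delay0 bdev
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# run the initiator-side workload in the background so the subsystem can be torn down under I/O
$perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!
# poll the perf process every 0.5s until it exits (or the delay guard trips)
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do
    (( delay++ > 20 )) && break   # roughly 10s worth of 0.5s sleeps, as in the trace
    sleep 0.5
done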
00:14:43.571 09:24:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:43.571 09:24:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 694042 00:14:43.571 09:24:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:43.829 09:24:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:43.829 09:24:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 694042 00:14:43.829 09:24:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:44.394 09:24:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:44.394 09:24:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 694042 00:14:44.394 09:24:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:44.958 09:24:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:44.958 09:24:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 694042 00:14:44.958 09:24:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:45.523 09:24:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:45.523 09:24:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 694042 00:14:45.523 09:24:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:46.088 09:24:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:46.088 09:24:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 694042 00:14:46.088 09:24:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:46.088 Initializing NVMe Controllers 00:14:46.088 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:46.088 Controller IO queue size 128, less than required. 00:14:46.088 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:46.088 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:46.088 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:46.088 Initialization complete. Launching workers. 
00:14:46.088 ======================================================== 00:14:46.088 Latency(us) 00:14:46.088 Device Information : IOPS MiB/s Average min max 00:14:46.089 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004109.65 1000255.86 1043135.77 00:14:46.089 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005309.35 1000236.70 1011339.80 00:14:46.089 ======================================================== 00:14:46.089 Total : 256.00 0.12 1004709.50 1000236.70 1043135.77 00:14:46.089 00:14:46.347 09:24:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:46.347 09:24:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 694042 00:14:46.347 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (694042) - No such process 00:14:46.347 09:24:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 694042 00:14:46.347 09:24:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:46.347 09:24:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:14:46.347 09:24:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:46.347 09:24:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:14:46.347 09:24:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:46.347 09:24:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:14:46.347 09:24:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:46.347 09:24:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:46.347 rmmod nvme_tcp 00:14:46.347 rmmod nvme_fabrics 00:14:46.347 rmmod nvme_keyring 00:14:46.606 09:24:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:46.606 09:24:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:14:46.606 09:24:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:14:46.606 09:24:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 693498 ']' 00:14:46.606 09:24:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 693498 00:14:46.606 09:24:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 693498 ']' 00:14:46.606 09:24:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 693498 00:14:46.606 09:24:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:14:46.606 09:24:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:46.606 09:24:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 693498 00:14:46.606 09:24:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:46.606 09:24:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:46.606 09:24:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 693498' 00:14:46.606 killing process with pid 693498 00:14:46.606 09:24:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 693498 00:14:46.606 09:24:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 693498 
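For reference, a condensed sketch of the teardown traced above (nvmftestfini followed by killprocess), with this run's nvmf_tgt pid standing in for the $nvmfpid variable the harness normally carries:
# unload the initiator-side modules; the rmmod lines above show nvme_tcp pulling
# nvme_fabrics and nvme_keyring with it
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
# stop the nvmf_tgt reactor started by nvmfappstart at the beginning of the test;
# wait only succeeds because nvmf_tgt is a child of the test shell
nvmfpid=693498
kill "$nvmfpid"
wait "$nvmfpid" 2>/dev/null || true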
00:14:46.865 09:24:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:46.865 09:24:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:46.865 09:24:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:46.865 09:24:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:46.865 09:24:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:46.865 09:24:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:46.865 09:24:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:46.865 09:24:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:48.766 09:24:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:48.766 00:14:48.766 real 0m12.134s 00:14:48.766 user 0m27.513s 00:14:48.766 sys 0m2.906s 00:14:48.766 09:24:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:48.766 09:24:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:48.766 ************************************ 00:14:48.766 END TEST nvmf_delete_subsystem 00:14:48.766 ************************************ 00:14:48.766 09:24:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:48.766 09:24:33 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:48.766 09:24:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:48.766 09:24:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:48.767 09:24:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:48.767 ************************************ 00:14:48.767 START TEST nvmf_ns_masking 00:14:48.767 ************************************ 00:14:48.767 09:24:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:48.767 * Looking for test storage... 
00:14:48.767 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:48.767 09:24:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:48.767 09:24:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:14:48.767 09:24:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:48.767 09:24:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:48.767 09:24:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:48.767 09:24:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:48.767 09:24:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:48.767 09:24:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:48.767 09:24:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:48.767 09:24:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:48.767 09:24:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:48.767 09:24:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:48.767 09:24:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:48.767 09:24:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:48.767 09:24:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:48.767 09:24:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:48.767 09:24:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:48.767 09:24:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:48.767 09:24:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:49.025 09:24:33 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:49.025 09:24:33 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:49.025 09:24:33 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:49.025 09:24:33 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.025 09:24:33 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.025 09:24:33 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.025 09:24:33 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:49.025 09:24:33 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.025 09:24:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:14:49.025 09:24:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:49.025 09:24:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:49.025 09:24:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:49.025 09:24:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:49.025 09:24:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:49.025 09:24:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:49.025 09:24:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:49.025 09:24:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:49.025 09:24:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:49.025 09:24:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:49.025 09:24:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:14:49.025 09:24:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:49.025 09:24:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=cdebf8bc-d040-47f6-843d-770bdbeaa791 00:14:49.025 09:24:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:49.025 09:24:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=1f3920ec-6e46-49b5-a5e6-95a62ff4d6d2 00:14:49.025 09:24:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:49.025 09:24:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:49.025 09:24:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:49.025 09:24:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:49.025 09:24:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=349b8794-d2dc-4d5f-98a0-9e627b95cc49 00:14:49.025 09:24:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:49.025 09:24:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:49.025 09:24:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:49.025 09:24:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:49.025 09:24:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:49.025 09:24:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:49.025 09:24:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:49.025 09:24:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:49.025 09:24:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:49.025 09:24:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:49.025 09:24:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:49.025 09:24:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:14:49.025 09:24:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:50.951 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:50.951 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:50.951 
09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:50.951 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:50.951 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:50.951 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:50.951 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:14:50.951 00:14:50.951 --- 10.0.0.2 ping statistics --- 00:14:50.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:50.951 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:14:50.951 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:50.952 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:50.952 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:14:50.952 00:14:50.952 --- 10.0.0.1 ping statistics --- 00:14:50.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:50.952 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:14:50.952 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:50.952 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:14:50.952 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:50.952 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:50.952 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:50.952 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:50.952 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:50.952 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:50.952 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:50.952 09:24:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:14:50.952 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:50.952 09:24:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:50.952 09:24:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:50.952 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=696382 00:14:50.952 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:50.952 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 696382 00:14:50.952 09:24:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 696382 ']' 00:14:50.952 09:24:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:50.952 09:24:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:50.952 09:24:35 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:50.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:50.952 09:24:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:50.952 09:24:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:50.952 [2024-07-14 09:24:35.386884] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:14:50.952 [2024-07-14 09:24:35.386970] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:51.210 EAL: No free 2048 kB hugepages reported on node 1 00:14:51.210 [2024-07-14 09:24:35.457585] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.210 [2024-07-14 09:24:35.543487] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:51.210 [2024-07-14 09:24:35.543544] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:51.210 [2024-07-14 09:24:35.543558] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:51.210 [2024-07-14 09:24:35.543568] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:51.210 [2024-07-14 09:24:35.543578] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:51.210 [2024-07-14 09:24:35.543603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.210 09:24:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:51.210 09:24:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:14:51.210 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:51.210 09:24:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:51.210 09:24:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:51.469 09:24:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:51.469 09:24:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:51.469 [2024-07-14 09:24:35.917037] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:51.727 09:24:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:51.727 09:24:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:51.727 09:24:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:51.985 Malloc1 00:14:51.985 09:24:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:52.243 Malloc2 00:14:52.243 09:24:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
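The target-side setup for the masking test, condensed from the RPC calls traced above (ns_masking.sh lines 53-62); the rpc.py path, bdev names and NQN are the ones used in this run:
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# create the TCP transport with the harness's default options
$rpc nvmf_create_transport -t tcp -o -u 8192
# two 64 MiB / 512 B-block ram disks to expose as namespaces 1 and 2
$rpc bdev_malloc_create 64 512 -b Malloc1
$rpc bdev_malloc_create 64 512 -b Malloc2
# the subsystem the masking checks will target; -a allows any host to connect
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME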
00:14:52.500 09:24:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:52.758 09:24:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:53.016 [2024-07-14 09:24:37.292260] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:53.016 09:24:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:53.016 09:24:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 349b8794-d2dc-4d5f-98a0-9e627b95cc49 -a 10.0.0.2 -s 4420 -i 4 00:14:53.016 09:24:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:53.016 09:24:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:53.016 09:24:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:53.016 09:24:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:53.016 09:24:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:55.543 09:24:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:55.543 09:24:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:55.543 09:24:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:55.543 09:24:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:55.543 09:24:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:55.543 09:24:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:55.543 09:24:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:55.543 09:24:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:55.543 09:24:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:55.543 09:24:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:55.543 09:24:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:55.543 09:24:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:55.543 09:24:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:55.543 [ 0]:0x1 00:14:55.543 09:24:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:55.543 09:24:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:55.543 09:24:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=346d6b1e8c334817bc55fee3c95aa25e 00:14:55.543 09:24:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 346d6b1e8c334817bc55fee3c95aa25e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:55.543 09:24:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
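A sketch of the visibility check the helper performs, reconstructed from the `nvme list-ns` / `nvme id-ns` / `jq` calls in the trace; /dev/nvme0 is the controller produced by the preceding `nvme connect`, and the all-zero NGUID comparison is how a masked namespace shows up:
ns_is_visible() {
    local nsid=$1
    # a visible namespace is listed as "[ n]:<nsid>" here; a masked one is not
    nvme list-ns /dev/nvme0 | grep "$nsid"
    # the deciding check: a masked namespace identifies with an all-zero NGUID
    local nguid
    nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
    [[ $nguid != "00000000000000000000000000000000" ]]
}
ns_is_visible 0x1   # namespace 1 is still auto-visible at this point, so this succeeds
ns_is_visible 0x2   # namespace 2 (Malloc2) was just added and should show up as well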
00:14:55.543 09:24:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:55.543 09:24:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:55.543 09:24:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:55.543 [ 0]:0x1 00:14:55.543 09:24:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:55.543 09:24:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:55.543 09:24:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=346d6b1e8c334817bc55fee3c95aa25e 00:14:55.543 09:24:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 346d6b1e8c334817bc55fee3c95aa25e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:55.543 09:24:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:55.543 09:24:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:55.543 09:24:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:55.543 [ 1]:0x2 00:14:55.543 09:24:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:55.543 09:24:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:55.543 09:24:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7820a539bfe644fc93bb4d6a9fe96561 00:14:55.543 09:24:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7820a539bfe644fc93bb4d6a9fe96561 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:55.543 09:24:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:55.543 09:24:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:55.800 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:55.800 09:24:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:56.057 09:24:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:56.314 09:24:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:56.314 09:24:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 349b8794-d2dc-4d5f-98a0-9e627b95cc49 -a 10.0.0.2 -s 4420 -i 4 00:14:56.572 09:24:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:56.572 09:24:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:56.572 09:24:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:56.572 09:24:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:14:56.572 09:24:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:14:56.572 09:24:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:58.539 09:24:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:58.539 09:24:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:58.539 09:24:42 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:58.539 09:24:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:58.539 09:24:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:58.539 09:24:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:58.539 09:24:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:58.539 09:24:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:58.539 09:24:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:58.539 09:24:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:58.540 09:24:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:58.540 09:24:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:58.540 09:24:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:58.540 09:24:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:58.540 09:24:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:58.540 09:24:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:58.540 09:24:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:58.540 09:24:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:58.540 09:24:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:58.540 09:24:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:58.540 09:24:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:58.540 09:24:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:58.798 09:24:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:58.798 09:24:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:58.798 09:24:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:58.798 09:24:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:58.798 09:24:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:58.798 09:24:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:58.798 09:24:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:14:58.798 09:24:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:58.798 09:24:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:58.798 [ 0]:0x2 00:14:58.798 09:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:58.798 09:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:58.798 09:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7820a539bfe644fc93bb4d6a9fe96561 00:14:58.798 09:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
7820a539bfe644fc93bb4d6a9fe96561 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:58.798 09:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:59.056 09:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:59.056 09:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:59.056 09:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:59.056 [ 0]:0x1 00:14:59.056 09:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:59.056 09:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:59.056 09:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=346d6b1e8c334817bc55fee3c95aa25e 00:14:59.056 09:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 346d6b1e8c334817bc55fee3c95aa25e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:59.056 09:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:59.056 09:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:59.056 09:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:59.056 [ 1]:0x2 00:14:59.056 09:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:59.056 09:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:59.056 09:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7820a539bfe644fc93bb4d6a9fe96561 00:14:59.056 09:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7820a539bfe644fc93bb4d6a9fe96561 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:59.056 09:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:59.313 09:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:59.313 09:24:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:59.313 09:24:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:59.313 09:24:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:59.313 09:24:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:59.313 09:24:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:59.313 09:24:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:59.313 09:24:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:59.313 09:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:59.313 09:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:59.313 09:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:59.313 09:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:59.313 09:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:14:59.313 09:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:59.313 09:24:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:59.313 09:24:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:59.313 09:24:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:59.313 09:24:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:59.313 09:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:59.313 09:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:59.313 09:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:59.313 [ 0]:0x2 00:14:59.313 09:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:59.313 09:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:59.570 09:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7820a539bfe644fc93bb4d6a9fe96561 00:14:59.570 09:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7820a539bfe644fc93bb4d6a9fe96561 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:59.570 09:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:59.570 09:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:59.570 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:59.570 09:24:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:59.827 09:24:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:59.827 09:24:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 349b8794-d2dc-4d5f-98a0-9e627b95cc49 -a 10.0.0.2 -s 4420 -i 4 00:14:59.827 09:24:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:59.827 09:24:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:59.827 09:24:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:59.827 09:24:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:59.827 09:24:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:59.827 09:24:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:15:02.353 09:24:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:02.353 09:24:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:02.353 09:24:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:02.353 09:24:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:15:02.353 09:24:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:02.353 09:24:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
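The ns_masking trace above reduces to a short RPC/CLI loop: a namespace added with --no-auto-visible stays hidden from every host until that host's NQN is allowed with nvmf_ns_add_host (and hidden again after nvmf_ns_remove_host), and the ns_is_visible helper checks this on the initiator by comparing the NGUID from nvme id-ns against the all-zero value an inactive NSID reports. A minimal sketch of that flow, assuming a target already serving nqn.2016-06.io.spdk:cnode1 with bdev Malloc1 and rpc.py reachable on PATH (the run above uses the full workspace path):

  # add the namespace masked from all hosts by default
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
  # allow only host1 to see NSID 1; nvmf_ns_remove_host reverses this
  rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  # on the initiator: a visible namespace reports its real NGUID,
  # a masked one reports 00000000000000000000000000000000
  nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid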
00:15:02.353 09:24:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:02.353 09:24:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:02.353 09:24:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:02.353 09:24:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:02.353 09:24:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:15:02.353 09:24:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:02.353 09:24:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:02.353 [ 0]:0x1 00:15:02.353 09:24:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:02.353 09:24:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:02.353 09:24:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=346d6b1e8c334817bc55fee3c95aa25e 00:15:02.353 09:24:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 346d6b1e8c334817bc55fee3c95aa25e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:02.353 09:24:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:15:02.353 09:24:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:02.353 09:24:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:02.353 [ 1]:0x2 00:15:02.353 09:24:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:02.353 09:24:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:02.353 09:24:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7820a539bfe644fc93bb4d6a9fe96561 00:15:02.353 09:24:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7820a539bfe644fc93bb4d6a9fe96561 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:02.353 09:24:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:02.611 09:24:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:15:02.611 09:24:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:02.611 09:24:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:02.611 09:24:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:02.611 09:24:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:02.611 09:24:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:02.611 09:24:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:02.611 09:24:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:02.611 09:24:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:02.611 09:24:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:02.611 09:24:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:02.611 09:24:46 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:15:02.611 09:24:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:02.611 09:24:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:02.611 09:24:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:02.611 09:24:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:02.611 09:24:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:02.611 09:24:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:02.611 09:24:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:15:02.611 09:24:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:02.611 09:24:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:02.611 [ 0]:0x2 00:15:02.611 09:24:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:02.611 09:24:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:02.611 09:24:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7820a539bfe644fc93bb4d6a9fe96561 00:15:02.611 09:24:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7820a539bfe644fc93bb4d6a9fe96561 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:02.611 09:24:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:02.611 09:24:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:02.611 09:24:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:02.611 09:24:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:02.611 09:24:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:02.611 09:24:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:02.611 09:24:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:02.611 09:24:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:02.611 09:24:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:02.611 09:24:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:02.611 09:24:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:02.612 09:24:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:02.869 [2024-07-14 09:24:47.182082] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:02.870 request: 00:15:02.870 { 00:15:02.870 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:02.870 "nsid": 2, 00:15:02.870 "host": "nqn.2016-06.io.spdk:host1", 00:15:02.870 "method": "nvmf_ns_remove_host", 00:15:02.870 "req_id": 1 00:15:02.870 } 00:15:02.870 Got JSON-RPC error response 00:15:02.870 response: 00:15:02.870 { 00:15:02.870 "code": -32602, 00:15:02.870 "message": "Invalid parameters" 00:15:02.870 } 00:15:02.870 09:24:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:02.870 09:24:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:02.870 09:24:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:02.870 09:24:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:02.870 09:24:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:15:02.870 09:24:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:02.870 09:24:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:02.870 09:24:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:02.870 09:24:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:02.870 09:24:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:02.870 09:24:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:02.870 09:24:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:02.870 09:24:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:02.870 09:24:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:02.870 09:24:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:02.870 09:24:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:02.870 09:24:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:02.870 09:24:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:02.870 09:24:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:02.870 09:24:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:02.870 09:24:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:02.870 09:24:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:02.870 09:24:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:15:02.870 09:24:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:02.870 09:24:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:02.870 [ 0]:0x2 00:15:02.870 09:24:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:02.870 09:24:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:03.128 09:24:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7820a539bfe644fc93bb4d6a9fe96561 00:15:03.128 09:24:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
7820a539bfe644fc93bb4d6a9fe96561 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:03.128 09:24:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:15:03.128 09:24:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:03.128 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:03.128 09:24:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=698002 00:15:03.128 09:24:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:15:03.128 09:24:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:15:03.128 09:24:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 698002 /var/tmp/host.sock 00:15:03.128 09:24:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 698002 ']' 00:15:03.128 09:24:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:15:03.128 09:24:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:03.128 09:24:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:03.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:03.128 09:24:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:03.128 09:24:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:03.128 [2024-07-14 09:24:47.526801] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
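The second application starting here (spdk_tgt -r /var/tmp/host.sock -m 2) takes over the host role for the rest of the masking test: both namespaces are re-created with explicit NGUIDs derived from UUIDs, per-namespace host lists are applied, and bdev_nvme_attach_controller on the host-side app should surface bdevs whose UUIDs match those NGUIDs. A rough sketch under those assumptions (uuid2nguid below is an illustrative stand-in for the helper the suite sources, and paths are shortened):

  # strip the dashes and upper-case the UUID to get a 32-hex-digit NGUID
  uuid2nguid() { echo "${1//-/}" | tr '[:lower:]' '[:upper:]'; }
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 \
      -g "$(uuid2nguid cdebf8bc-d040-47f6-843d-770bdbeaa791)"
  # attach from the host-side SPDK app; the resulting bdev (nvme0n1 here)
  # is expected to report the matching uuid via bdev_get_bdevs
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 \
      -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0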
00:15:03.128 [2024-07-14 09:24:47.526903] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid698002 ] 00:15:03.128 EAL: No free 2048 kB hugepages reported on node 1 00:15:03.387 [2024-07-14 09:24:47.594700] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.387 [2024-07-14 09:24:47.690828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:03.645 09:24:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:03.645 09:24:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:15:03.645 09:24:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:03.903 09:24:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:04.161 09:24:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid cdebf8bc-d040-47f6-843d-770bdbeaa791 00:15:04.161 09:24:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:15:04.161 09:24:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g CDEBF8BCD04047F6843D770BDBEAA791 -i 00:15:04.419 09:24:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 1f3920ec-6e46-49b5-a5e6-95a62ff4d6d2 00:15:04.419 09:24:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:15:04.419 09:24:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 1F3920EC6E4649B5A5E695A62FF4D6D2 -i 00:15:04.677 09:24:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:04.935 09:24:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:15:05.193 09:24:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:05.193 09:24:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:05.451 nvme0n1 00:15:05.709 09:24:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:05.709 09:24:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:15:05.967 nvme1n2 00:15:05.967 09:24:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:15:05.967 09:24:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:05.967 09:24:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:15:05.967 09:24:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:15:05.967 09:24:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:15:06.225 09:24:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:15:06.225 09:24:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:15:06.225 09:24:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:15:06.225 09:24:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:15:06.481 09:24:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ cdebf8bc-d040-47f6-843d-770bdbeaa791 == \c\d\e\b\f\8\b\c\-\d\0\4\0\-\4\7\f\6\-\8\4\3\d\-\7\7\0\b\d\b\e\a\a\7\9\1 ]] 00:15:06.481 09:24:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:15:06.481 09:24:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:15:06.481 09:24:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:15:06.739 09:24:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 1f3920ec-6e46-49b5-a5e6-95a62ff4d6d2 == \1\f\3\9\2\0\e\c\-\6\e\4\6\-\4\9\b\5\-\a\5\e\6\-\9\5\a\6\2\f\f\4\d\6\d\2 ]] 00:15:06.739 09:24:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 698002 00:15:06.739 09:24:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 698002 ']' 00:15:06.739 09:24:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 698002 00:15:06.739 09:24:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:15:06.739 09:24:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:06.739 09:24:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 698002 00:15:06.739 09:24:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:06.739 09:24:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:06.739 09:24:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 698002' 00:15:06.739 killing process with pid 698002 00:15:06.739 09:24:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 698002 00:15:06.739 09:24:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 698002 00:15:07.304 09:24:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:07.563 09:24:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:15:07.563 09:24:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:15:07.563 09:24:51 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:07.563 09:24:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:15:07.563 09:24:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:07.563 09:24:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:15:07.563 09:24:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:07.563 09:24:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:07.563 rmmod nvme_tcp 00:15:07.563 rmmod nvme_fabrics 00:15:07.563 rmmod nvme_keyring 00:15:07.563 09:24:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:07.563 09:24:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:15:07.563 09:24:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:15:07.563 09:24:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 696382 ']' 00:15:07.563 09:24:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 696382 00:15:07.563 09:24:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 696382 ']' 00:15:07.563 09:24:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 696382 00:15:07.563 09:24:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:15:07.563 09:24:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:07.563 09:24:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 696382 00:15:07.563 09:24:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:07.563 09:24:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:07.563 09:24:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 696382' 00:15:07.563 killing process with pid 696382 00:15:07.563 09:24:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 696382 00:15:07.563 09:24:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 696382 00:15:07.822 09:24:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:07.822 09:24:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:07.822 09:24:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:07.822 09:24:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:07.822 09:24:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:07.822 09:24:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:07.822 09:24:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:07.822 09:24:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:10.356 09:24:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:10.356 00:15:10.356 real 0m21.116s 00:15:10.356 user 0m27.538s 00:15:10.356 sys 0m4.083s 00:15:10.356 09:24:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:10.356 09:24:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:10.356 ************************************ 00:15:10.356 END TEST nvmf_ns_masking 00:15:10.356 ************************************ 00:15:10.356 09:24:54 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:15:10.356 09:24:54 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:15:10.356 09:24:54 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:10.356 09:24:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:10.356 09:24:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:10.356 09:24:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:10.356 ************************************ 00:15:10.356 START TEST nvmf_nvme_cli 00:15:10.356 ************************************ 00:15:10.356 09:24:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:10.356 * Looking for test storage... 00:15:10.356 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:10.356 09:24:54 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:10.356 09:24:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:15:10.356 09:24:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:10.356 09:24:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:10.356 09:24:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:10.356 09:24:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:10.356 09:24:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:10.356 09:24:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:10.356 09:24:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:10.356 09:24:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:10.356 09:24:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:10.356 09:24:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:10.356 09:24:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:10.356 09:24:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:10.356 09:24:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:10.356 09:24:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:10.356 09:24:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:10.356 09:24:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:10.356 09:24:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:10.356 09:24:54 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:10.356 09:24:54 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:10.356 09:24:54 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:10.356 09:24:54 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.356 09:24:54 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.356 09:24:54 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.356 09:24:54 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:15:10.356 09:24:54 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.356 09:24:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:15:10.356 09:24:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:10.356 09:24:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:10.356 09:24:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:10.356 09:24:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:10.356 09:24:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:10.356 09:24:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:10.356 09:24:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:10.356 09:24:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:10.356 09:24:54 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:10.356 09:24:54 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:10.356 09:24:54 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:15:10.356 09:24:54 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:15:10.356 09:24:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:10.356 09:24:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:10.356 09:24:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:10.356 09:24:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:10.356 09:24:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:10.356 09:24:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:10.357 09:24:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:10.357 09:24:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:10.357 09:24:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:10.357 09:24:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:10.357 09:24:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:15:10.357 09:24:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:12.261 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:12.261 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:15:12.261 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:12.261 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:12.261 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:12.261 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:12.261 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:12.261 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:15:12.261 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:12.261 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:15:12.261 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:15:12.261 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:15:12.261 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:15:12.261 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:15:12.261 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:15:12.261 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:12.261 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:12.261 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:12.261 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:12.261 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:12.261 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:12.261 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:12.261 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:12.261 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:12.261 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:12.261 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:12.261 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:12.261 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:12.261 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:12.261 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:12.261 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:12.261 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:12.261 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:12.261 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:12.261 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:12.261 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:12.261 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:12.261 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:12.261 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:12.261 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:12.261 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:12.261 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:12.261 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:12.261 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:12.261 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:12.261 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:12.261 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:12.261 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:12.261 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:12.262 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:12.262 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:12.262 09:24:56 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:12.262 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:12.262 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:15:12.262 00:15:12.262 --- 10.0.0.2 ping statistics --- 00:15:12.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.262 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:12.262 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:12.262 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:15:12.262 00:15:12.262 --- 10.0.0.1 ping statistics --- 00:15:12.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.262 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=700494 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 700494 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 700494 ']' 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:12.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:12.262 09:24:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:12.262 [2024-07-14 09:24:56.611315] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
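The nvme_cli test reuses the back-to-back setup configured just above: one physical port (cvl_0_0) is moved into the cvl_0_0_ns_spdk network namespace and addressed 10.0.0.2 for the target, its peer (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, TCP port 4420 is opened in iptables, and nvmf_tgt is then launched inside the namespace. A condensed sketch of that plumbing, assuming the interface names detected in this run and with binary paths shortened:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # target started inside the namespace, as in the trace that follows
  ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF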
00:15:12.262 [2024-07-14 09:24:56.611412] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:12.262 EAL: No free 2048 kB hugepages reported on node 1 00:15:12.262 [2024-07-14 09:24:56.683862] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:12.520 [2024-07-14 09:24:56.779653] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:12.520 [2024-07-14 09:24:56.779706] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:12.520 [2024-07-14 09:24:56.779735] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:12.520 [2024-07-14 09:24:56.779747] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:12.520 [2024-07-14 09:24:56.779757] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:12.520 [2024-07-14 09:24:56.779849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:12.520 [2024-07-14 09:24:56.779930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:12.520 [2024-07-14 09:24:56.779953] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:12.520 [2024-07-14 09:24:56.779956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.520 09:24:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:12.520 09:24:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:15:12.520 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:12.520 09:24:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:12.520 09:24:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:12.521 09:24:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:12.521 09:24:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:12.521 09:24:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.521 09:24:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:12.521 [2024-07-14 09:24:56.932754] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:12.521 09:24:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.521 09:24:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:12.521 09:24:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.521 09:24:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:12.521 Malloc0 00:15:12.521 09:24:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.521 09:24:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:12.521 09:24:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.521 09:24:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:12.779 Malloc1 00:15:12.779 09:24:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.779 09:24:56 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:12.779 09:24:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.779 09:24:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:12.779 09:24:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.779 09:24:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:12.779 09:24:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.779 09:24:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:12.779 09:24:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.779 09:24:57 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:12.779 09:24:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.779 09:24:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:12.779 09:24:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.779 09:24:57 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:12.779 09:24:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.779 09:24:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:12.779 [2024-07-14 09:24:57.019028] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:12.779 09:24:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.779 09:24:57 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:12.779 09:24:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.779 09:24:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:12.779 09:24:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.779 09:24:57 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:15:12.779 00:15:12.779 Discovery Log Number of Records 2, Generation counter 2 00:15:12.779 =====Discovery Log Entry 0====== 00:15:12.779 trtype: tcp 00:15:12.779 adrfam: ipv4 00:15:12.779 subtype: current discovery subsystem 00:15:12.779 treq: not required 00:15:12.779 portid: 0 00:15:12.779 trsvcid: 4420 00:15:12.779 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:12.779 traddr: 10.0.0.2 00:15:12.779 eflags: explicit discovery connections, duplicate discovery information 00:15:12.779 sectype: none 00:15:12.779 =====Discovery Log Entry 1====== 00:15:12.779 trtype: tcp 00:15:12.779 adrfam: ipv4 00:15:12.779 subtype: nvme subsystem 00:15:12.779 treq: not required 00:15:12.779 portid: 0 00:15:12.779 trsvcid: 4420 00:15:12.779 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:12.779 traddr: 10.0.0.2 00:15:12.779 eflags: none 00:15:12.779 sectype: none 00:15:12.779 09:24:57 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:12.779 09:24:57 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:12.779 09:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:12.779 09:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:12.779 09:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:12.779 09:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:12.779 09:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:12.779 09:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:12.779 09:24:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:12.779 09:24:57 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:12.779 09:24:57 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:13.732 09:24:57 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:13.732 09:24:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:15:13.732 09:24:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:13.732 09:24:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:15:13.732 09:24:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:15:13.732 09:24:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:15:15.629 09:24:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:15.629 09:24:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:15.629 09:24:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:15.629 09:24:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:15:15.629 09:24:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:15.629 09:24:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:15:15.629 09:24:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:15.629 09:24:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:15.629 09:24:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:15.629 09:24:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:15.629 09:24:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:15.629 09:24:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:15.629 09:24:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:15.629 09:24:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:15.629 09:24:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:15.629 09:24:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:15:15.629 09:24:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:15.629 09:24:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:15.629 09:24:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:15:15.629 09:24:59 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:15.629 09:24:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:15:15.629 /dev/nvme0n1 ]] 00:15:15.629 09:24:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:15.629 09:24:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:15.629 09:24:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:15.629 09:24:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:15.629 09:24:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:15.629 09:24:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:15.629 09:24:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:15.629 09:24:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:15.629 09:24:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:15.629 09:24:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:15.629 09:24:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:15:15.629 09:24:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:15.629 09:24:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:15.629 09:24:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:15:15.629 09:24:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:15.629 09:24:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:15.629 09:24:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:15.629 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:15.629 09:24:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:15.629 09:24:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:15:15.629 09:24:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:15.629 09:24:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:15.629 09:24:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:15.629 09:24:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:15.629 09:24:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:15:15.629 09:24:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:15.629 09:24:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:15.629 09:24:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.629 09:24:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:15.629 09:24:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.629 09:24:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:15.629 09:24:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:15.629 09:24:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:15.629 09:24:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:15:15.629 09:24:59 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:15.629 09:24:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:15:15.629 09:24:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:15.629 09:24:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:15.629 rmmod nvme_tcp 00:15:15.629 rmmod nvme_fabrics 00:15:15.629 rmmod nvme_keyring 00:15:15.629 09:25:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:15.629 09:25:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:15:15.629 09:25:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:15:15.629 09:25:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 700494 ']' 00:15:15.629 09:25:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 700494 00:15:15.629 09:25:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 700494 ']' 00:15:15.629 09:25:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 700494 00:15:15.629 09:25:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:15:15.629 09:25:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:15.629 09:25:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 700494 00:15:15.629 09:25:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:15.629 09:25:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:15.629 09:25:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 700494' 00:15:15.629 killing process with pid 700494 00:15:15.629 09:25:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 700494 00:15:15.629 09:25:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 700494 00:15:16.196 09:25:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:16.196 09:25:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:16.196 09:25:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:16.196 09:25:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:16.196 09:25:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:16.196 09:25:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:16.196 09:25:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:16.196 09:25:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:18.097 09:25:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:18.097 00:15:18.097 real 0m8.075s 00:15:18.097 user 0m14.620s 00:15:18.097 sys 0m2.238s 00:15:18.097 09:25:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:18.097 09:25:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:18.097 ************************************ 00:15:18.097 END TEST nvmf_nvme_cli 00:15:18.097 ************************************ 00:15:18.097 09:25:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:18.097 09:25:02 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:15:18.097 09:25:02 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:15:18.097 09:25:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:18.097 09:25:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:18.097 09:25:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:18.097 ************************************ 00:15:18.097 START TEST nvmf_vfio_user 00:15:18.097 ************************************ 00:15:18.097 09:25:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:18.097 * Looking for test storage... 00:15:18.097 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:18.097 09:25:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:18.097 09:25:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:15:18.097 09:25:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:18.097 09:25:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:18.097 09:25:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:18.097 09:25:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:18.098 09:25:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:18.098 09:25:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:18.098 09:25:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:18.098 09:25:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:18.098 09:25:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:18.098 09:25:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:18.098 09:25:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:18.098 09:25:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:18.098 09:25:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:18.098 09:25:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:18.098 09:25:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:18.098 09:25:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:18.098 09:25:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:18.098 09:25:02 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:18.098 09:25:02 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:18.098 09:25:02 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:18.098 09:25:02 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.098 09:25:02 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.098 09:25:02 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.098 09:25:02 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:15:18.098 09:25:02 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.098 09:25:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:15:18.098 09:25:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:18.098 09:25:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:18.098 09:25:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:18.098 09:25:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:18.098 09:25:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:18.098 09:25:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:18.098 09:25:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:18.098 09:25:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:18.098 09:25:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:18.098 09:25:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:18.098 09:25:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:18.098 
09:25:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:18.098 09:25:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:18.098 09:25:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:18.098 09:25:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:18.098 09:25:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:18.098 09:25:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:18.098 09:25:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:15:18.098 09:25:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=701293 00:15:18.098 09:25:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:18.098 09:25:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 701293' 00:15:18.098 Process pid: 701293 00:15:18.098 09:25:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:18.098 09:25:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 701293 00:15:18.098 09:25:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 701293 ']' 00:15:18.098 09:25:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:18.098 09:25:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:18.098 09:25:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:18.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:18.098 09:25:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:18.098 09:25:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:18.356 [2024-07-14 09:25:02.584025] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:15:18.356 [2024-07-14 09:25:02.584104] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:18.356 EAL: No free 2048 kB hugepages reported on node 1 00:15:18.356 [2024-07-14 09:25:02.644614] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:18.356 [2024-07-14 09:25:02.732462] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:18.356 [2024-07-14 09:25:02.732512] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:18.357 [2024-07-14 09:25:02.732526] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:18.357 [2024-07-14 09:25:02.732536] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:18.357 [2024-07-14 09:25:02.732546] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
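(Editor's note: for readability, the nvmf_nvme_cli test that finished above boils down to the command sequence below. This is a sketch reconstructed from the xtrace output, reusing the subsystem NQN, serial, address and port exactly as they appear in the trace; it is not an authoritative recipe. The trace drives these through the rpc_cmd test-harness wrapper, while the sketch calls scripts/rpc.py directly, and <hostnqn>/<hostid> stand in for the generated host NQN and host ID shown in the log.)

# target side: create the subsystem, back it with two malloc namespaces, expose TCP data and discovery listeners
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# host side: discover the target, connect, wait for both namespaces to appear, then disconnect
nvme discover -t tcp -a 10.0.0.2 -s 4420 --hostnqn=<hostnqn> --hostid=<hostid>
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 --hostnqn=<hostnqn> --hostid=<hostid>
nvme disconnect -n nqn.2016-06.io.spdk:cnode1

The nvmf_vfio_user run now starting follows the same create-subsystem / add-ns / add-listener pattern, but over the VFIOUSER transport, with listeners rooted at /var/run/vfio-user/domain/vfio-userN/N instead of a TCP address, as the traced rpc.py calls below show.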
00:15:18.357 [2024-07-14 09:25:02.732632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:18.357 [2024-07-14 09:25:02.732654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:18.357 [2024-07-14 09:25:02.732712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:18.357 [2024-07-14 09:25:02.732715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.614 09:25:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:18.614 09:25:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:15:18.614 09:25:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:19.545 09:25:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:19.803 09:25:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:19.803 09:25:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:19.803 09:25:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:19.803 09:25:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:19.803 09:25:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:20.060 Malloc1 00:15:20.061 09:25:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:20.318 09:25:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:20.575 09:25:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:20.833 09:25:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:20.833 09:25:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:20.833 09:25:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:21.091 Malloc2 00:15:21.349 09:25:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:21.349 09:25:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:21.607 09:25:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:21.864 09:25:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:21.864 09:25:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:21.864 09:25:06 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:21.864 09:25:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:21.864 09:25:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:21.864 09:25:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:21.864 [2024-07-14 09:25:06.302506] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:15:21.864 [2024-07-14 09:25:06.302550] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid701792 ] 00:15:21.864 EAL: No free 2048 kB hugepages reported on node 1 00:15:22.125 [2024-07-14 09:25:06.338193] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:22.125 [2024-07-14 09:25:06.346376] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:22.125 [2024-07-14 09:25:06.346404] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f735337b000 00:15:22.125 [2024-07-14 09:25:06.347367] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:22.125 [2024-07-14 09:25:06.348357] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:22.125 [2024-07-14 09:25:06.349361] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:22.125 [2024-07-14 09:25:06.350367] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:22.125 [2024-07-14 09:25:06.351373] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:22.125 [2024-07-14 09:25:06.352378] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:22.125 [2024-07-14 09:25:06.353384] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:22.125 [2024-07-14 09:25:06.354392] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:22.125 [2024-07-14 09:25:06.355396] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:22.125 [2024-07-14 09:25:06.355416] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f735212f000 00:15:22.125 [2024-07-14 09:25:06.356567] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:22.125 [2024-07-14 09:25:06.372588] vfio_user_pci.c: 
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:22.125 [2024-07-14 09:25:06.372627] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:15:22.125 [2024-07-14 09:25:06.377533] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:22.125 [2024-07-14 09:25:06.377581] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:22.125 [2024-07-14 09:25:06.377666] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:15:22.125 [2024-07-14 09:25:06.377694] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:15:22.125 [2024-07-14 09:25:06.377704] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:15:22.125 [2024-07-14 09:25:06.378527] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:22.125 [2024-07-14 09:25:06.378546] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:15:22.125 [2024-07-14 09:25:06.378558] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:15:22.125 [2024-07-14 09:25:06.379529] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:22.125 [2024-07-14 09:25:06.379546] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:15:22.125 [2024-07-14 09:25:06.379559] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:15:22.125 [2024-07-14 09:25:06.380536] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:22.125 [2024-07-14 09:25:06.380554] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:22.125 [2024-07-14 09:25:06.381539] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:15:22.125 [2024-07-14 09:25:06.381558] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:15:22.125 [2024-07-14 09:25:06.381567] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:15:22.125 [2024-07-14 09:25:06.381579] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:22.125 [2024-07-14 09:25:06.381688] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:15:22.125 [2024-07-14 09:25:06.381696] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:22.125 [2024-07-14 09:25:06.381705] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:22.125 [2024-07-14 09:25:06.382543] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:22.125 [2024-07-14 09:25:06.383544] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:22.125 [2024-07-14 09:25:06.384553] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:22.125 [2024-07-14 09:25:06.385550] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:22.125 [2024-07-14 09:25:06.385661] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:22.125 [2024-07-14 09:25:06.386570] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:22.125 [2024-07-14 09:25:06.386587] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:22.125 [2024-07-14 09:25:06.386595] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:15:22.125 [2024-07-14 09:25:06.386618] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:15:22.125 [2024-07-14 09:25:06.386635] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:15:22.125 [2024-07-14 09:25:06.386659] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:22.125 [2024-07-14 09:25:06.386668] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:22.125 [2024-07-14 09:25:06.386686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:22.125 [2024-07-14 09:25:06.386740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:22.125 [2024-07-14 09:25:06.386755] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:15:22.125 [2024-07-14 09:25:06.386767] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:15:22.125 [2024-07-14 09:25:06.386774] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:15:22.125 [2024-07-14 09:25:06.386782] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:22.125 [2024-07-14 09:25:06.386790] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:15:22.125 [2024-07-14 09:25:06.386797] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:15:22.125 [2024-07-14 09:25:06.386805] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:15:22.125 [2024-07-14 09:25:06.386817] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:15:22.126 [2024-07-14 09:25:06.386832] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:22.126 [2024-07-14 09:25:06.386863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:22.126 [2024-07-14 09:25:06.386894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:22.126 [2024-07-14 09:25:06.386909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:22.126 [2024-07-14 09:25:06.386921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:22.126 [2024-07-14 09:25:06.386937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:22.126 [2024-07-14 09:25:06.386946] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:22.126 [2024-07-14 09:25:06.386962] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:22.126 [2024-07-14 09:25:06.386977] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:22.126 [2024-07-14 09:25:06.386991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:22.126 [2024-07-14 09:25:06.387002] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:15:22.126 [2024-07-14 09:25:06.387011] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:22.126 [2024-07-14 09:25:06.387022] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:15:22.126 [2024-07-14 09:25:06.387032] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:22.126 [2024-07-14 09:25:06.387045] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:22.126 [2024-07-14 09:25:06.387056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:22.126 [2024-07-14 09:25:06.387120] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:15:22.126 [2024-07-14 09:25:06.387134] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:22.126 [2024-07-14 09:25:06.387147] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:22.126 [2024-07-14 09:25:06.387156] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:22.126 [2024-07-14 09:25:06.387181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:22.126 [2024-07-14 09:25:06.387196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:22.126 [2024-07-14 09:25:06.387212] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:15:22.126 [2024-07-14 09:25:06.387242] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:15:22.126 [2024-07-14 09:25:06.387256] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:15:22.126 [2024-07-14 09:25:06.387267] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:22.126 [2024-07-14 09:25:06.387275] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:22.126 [2024-07-14 09:25:06.387284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:22.126 [2024-07-14 09:25:06.387308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:22.126 [2024-07-14 09:25:06.387329] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:22.126 [2024-07-14 09:25:06.387346] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:22.126 [2024-07-14 09:25:06.387358] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:22.126 [2024-07-14 09:25:06.387366] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:22.126 [2024-07-14 09:25:06.387375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:22.126 [2024-07-14 09:25:06.387388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:22.126 [2024-07-14 09:25:06.387401] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:22.126 [2024-07-14 09:25:06.387412] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
00:15:22.126 [2024-07-14 09:25:06.387425] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:15:22.126 [2024-07-14 09:25:06.387435] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:15:22.126 [2024-07-14 09:25:06.387443] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:22.126 [2024-07-14 09:25:06.387451] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:15:22.126 [2024-07-14 09:25:06.387459] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:15:22.126 [2024-07-14 09:25:06.387466] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:15:22.126 [2024-07-14 09:25:06.387474] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:15:22.126 [2024-07-14 09:25:06.387498] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:22.126 [2024-07-14 09:25:06.387516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:22.126 [2024-07-14 09:25:06.387534] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:22.126 [2024-07-14 09:25:06.387545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:22.126 [2024-07-14 09:25:06.387561] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:22.126 [2024-07-14 09:25:06.387575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:22.126 [2024-07-14 09:25:06.387590] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:22.126 [2024-07-14 09:25:06.387601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:22.126 [2024-07-14 09:25:06.387622] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:22.126 [2024-07-14 09:25:06.387632] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:22.126 [2024-07-14 09:25:06.387638] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:22.126 [2024-07-14 09:25:06.387644] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:22.126 [2024-07-14 09:25:06.387653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:22.126 [2024-07-14 09:25:06.387668] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:22.126 
[2024-07-14 09:25:06.387676] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:22.126 [2024-07-14 09:25:06.387685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:22.126 [2024-07-14 09:25:06.387696] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:22.126 [2024-07-14 09:25:06.387703] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:22.126 [2024-07-14 09:25:06.387712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:22.126 [2024-07-14 09:25:06.387723] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:22.126 [2024-07-14 09:25:06.387731] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:22.126 [2024-07-14 09:25:06.387739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:22.126 [2024-07-14 09:25:06.387750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:22.126 [2024-07-14 09:25:06.387770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:22.126 [2024-07-14 09:25:06.387787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:22.126 [2024-07-14 09:25:06.387798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:22.126 ===================================================== 00:15:22.126 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:22.126 ===================================================== 00:15:22.126 Controller Capabilities/Features 00:15:22.126 ================================ 00:15:22.126 Vendor ID: 4e58 00:15:22.126 Subsystem Vendor ID: 4e58 00:15:22.126 Serial Number: SPDK1 00:15:22.126 Model Number: SPDK bdev Controller 00:15:22.126 Firmware Version: 24.09 00:15:22.126 Recommended Arb Burst: 6 00:15:22.126 IEEE OUI Identifier: 8d 6b 50 00:15:22.126 Multi-path I/O 00:15:22.126 May have multiple subsystem ports: Yes 00:15:22.126 May have multiple controllers: Yes 00:15:22.126 Associated with SR-IOV VF: No 00:15:22.126 Max Data Transfer Size: 131072 00:15:22.126 Max Number of Namespaces: 32 00:15:22.126 Max Number of I/O Queues: 127 00:15:22.126 NVMe Specification Version (VS): 1.3 00:15:22.126 NVMe Specification Version (Identify): 1.3 00:15:22.126 Maximum Queue Entries: 256 00:15:22.126 Contiguous Queues Required: Yes 00:15:22.126 Arbitration Mechanisms Supported 00:15:22.126 Weighted Round Robin: Not Supported 00:15:22.126 Vendor Specific: Not Supported 00:15:22.126 Reset Timeout: 15000 ms 00:15:22.126 Doorbell Stride: 4 bytes 00:15:22.126 NVM Subsystem Reset: Not Supported 00:15:22.126 Command Sets Supported 00:15:22.126 NVM Command Set: Supported 00:15:22.126 Boot Partition: Not Supported 00:15:22.126 Memory Page Size Minimum: 4096 bytes 00:15:22.126 Memory Page Size Maximum: 4096 bytes 00:15:22.126 Persistent Memory Region: Not Supported 
00:15:22.126 Optional Asynchronous Events Supported 00:15:22.126 Namespace Attribute Notices: Supported 00:15:22.126 Firmware Activation Notices: Not Supported 00:15:22.127 ANA Change Notices: Not Supported 00:15:22.127 PLE Aggregate Log Change Notices: Not Supported 00:15:22.127 LBA Status Info Alert Notices: Not Supported 00:15:22.127 EGE Aggregate Log Change Notices: Not Supported 00:15:22.127 Normal NVM Subsystem Shutdown event: Not Supported 00:15:22.127 Zone Descriptor Change Notices: Not Supported 00:15:22.127 Discovery Log Change Notices: Not Supported 00:15:22.127 Controller Attributes 00:15:22.127 128-bit Host Identifier: Supported 00:15:22.127 Non-Operational Permissive Mode: Not Supported 00:15:22.127 NVM Sets: Not Supported 00:15:22.127 Read Recovery Levels: Not Supported 00:15:22.127 Endurance Groups: Not Supported 00:15:22.127 Predictable Latency Mode: Not Supported 00:15:22.127 Traffic Based Keep ALive: Not Supported 00:15:22.127 Namespace Granularity: Not Supported 00:15:22.127 SQ Associations: Not Supported 00:15:22.127 UUID List: Not Supported 00:15:22.127 Multi-Domain Subsystem: Not Supported 00:15:22.127 Fixed Capacity Management: Not Supported 00:15:22.127 Variable Capacity Management: Not Supported 00:15:22.127 Delete Endurance Group: Not Supported 00:15:22.127 Delete NVM Set: Not Supported 00:15:22.127 Extended LBA Formats Supported: Not Supported 00:15:22.127 Flexible Data Placement Supported: Not Supported 00:15:22.127 00:15:22.127 Controller Memory Buffer Support 00:15:22.127 ================================ 00:15:22.127 Supported: No 00:15:22.127 00:15:22.127 Persistent Memory Region Support 00:15:22.127 ================================ 00:15:22.127 Supported: No 00:15:22.127 00:15:22.127 Admin Command Set Attributes 00:15:22.127 ============================ 00:15:22.127 Security Send/Receive: Not Supported 00:15:22.127 Format NVM: Not Supported 00:15:22.127 Firmware Activate/Download: Not Supported 00:15:22.127 Namespace Management: Not Supported 00:15:22.127 Device Self-Test: Not Supported 00:15:22.127 Directives: Not Supported 00:15:22.127 NVMe-MI: Not Supported 00:15:22.127 Virtualization Management: Not Supported 00:15:22.127 Doorbell Buffer Config: Not Supported 00:15:22.127 Get LBA Status Capability: Not Supported 00:15:22.127 Command & Feature Lockdown Capability: Not Supported 00:15:22.127 Abort Command Limit: 4 00:15:22.127 Async Event Request Limit: 4 00:15:22.127 Number of Firmware Slots: N/A 00:15:22.127 Firmware Slot 1 Read-Only: N/A 00:15:22.127 Firmware Activation Without Reset: N/A 00:15:22.127 Multiple Update Detection Support: N/A 00:15:22.127 Firmware Update Granularity: No Information Provided 00:15:22.127 Per-Namespace SMART Log: No 00:15:22.127 Asymmetric Namespace Access Log Page: Not Supported 00:15:22.127 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:22.127 Command Effects Log Page: Supported 00:15:22.127 Get Log Page Extended Data: Supported 00:15:22.127 Telemetry Log Pages: Not Supported 00:15:22.127 Persistent Event Log Pages: Not Supported 00:15:22.127 Supported Log Pages Log Page: May Support 00:15:22.127 Commands Supported & Effects Log Page: Not Supported 00:15:22.127 Feature Identifiers & Effects Log Page:May Support 00:15:22.127 NVMe-MI Commands & Effects Log Page: May Support 00:15:22.127 Data Area 4 for Telemetry Log: Not Supported 00:15:22.127 Error Log Page Entries Supported: 128 00:15:22.127 Keep Alive: Supported 00:15:22.127 Keep Alive Granularity: 10000 ms 00:15:22.127 00:15:22.127 NVM Command Set Attributes 
00:15:22.127 ========================== 00:15:22.127 Submission Queue Entry Size 00:15:22.127 Max: 64 00:15:22.127 Min: 64 00:15:22.127 Completion Queue Entry Size 00:15:22.127 Max: 16 00:15:22.127 Min: 16 00:15:22.127 Number of Namespaces: 32 00:15:22.127 Compare Command: Supported 00:15:22.127 Write Uncorrectable Command: Not Supported 00:15:22.127 Dataset Management Command: Supported 00:15:22.127 Write Zeroes Command: Supported 00:15:22.127 Set Features Save Field: Not Supported 00:15:22.127 Reservations: Not Supported 00:15:22.127 Timestamp: Not Supported 00:15:22.127 Copy: Supported 00:15:22.127 Volatile Write Cache: Present 00:15:22.127 Atomic Write Unit (Normal): 1 00:15:22.127 Atomic Write Unit (PFail): 1 00:15:22.127 Atomic Compare & Write Unit: 1 00:15:22.127 Fused Compare & Write: Supported 00:15:22.127 Scatter-Gather List 00:15:22.127 SGL Command Set: Supported (Dword aligned) 00:15:22.127 SGL Keyed: Not Supported 00:15:22.127 SGL Bit Bucket Descriptor: Not Supported 00:15:22.127 SGL Metadata Pointer: Not Supported 00:15:22.127 Oversized SGL: Not Supported 00:15:22.127 SGL Metadata Address: Not Supported 00:15:22.127 SGL Offset: Not Supported 00:15:22.127 Transport SGL Data Block: Not Supported 00:15:22.127 Replay Protected Memory Block: Not Supported 00:15:22.127 00:15:22.127 Firmware Slot Information 00:15:22.127 ========================= 00:15:22.127 Active slot: 1 00:15:22.127 Slot 1 Firmware Revision: 24.09 00:15:22.127 00:15:22.127 00:15:22.127 Commands Supported and Effects 00:15:22.127 ============================== 00:15:22.127 Admin Commands 00:15:22.127 -------------- 00:15:22.127 Get Log Page (02h): Supported 00:15:22.127 Identify (06h): Supported 00:15:22.127 Abort (08h): Supported 00:15:22.127 Set Features (09h): Supported 00:15:22.127 Get Features (0Ah): Supported 00:15:22.127 Asynchronous Event Request (0Ch): Supported 00:15:22.127 Keep Alive (18h): Supported 00:15:22.127 I/O Commands 00:15:22.127 ------------ 00:15:22.127 Flush (00h): Supported LBA-Change 00:15:22.127 Write (01h): Supported LBA-Change 00:15:22.127 Read (02h): Supported 00:15:22.127 Compare (05h): Supported 00:15:22.127 Write Zeroes (08h): Supported LBA-Change 00:15:22.127 Dataset Management (09h): Supported LBA-Change 00:15:22.127 Copy (19h): Supported LBA-Change 00:15:22.127 00:15:22.127 Error Log 00:15:22.127 ========= 00:15:22.127 00:15:22.127 Arbitration 00:15:22.127 =========== 00:15:22.127 Arbitration Burst: 1 00:15:22.127 00:15:22.127 Power Management 00:15:22.127 ================ 00:15:22.127 Number of Power States: 1 00:15:22.127 Current Power State: Power State #0 00:15:22.127 Power State #0: 00:15:22.127 Max Power: 0.00 W 00:15:22.127 Non-Operational State: Operational 00:15:22.127 Entry Latency: Not Reported 00:15:22.127 Exit Latency: Not Reported 00:15:22.127 Relative Read Throughput: 0 00:15:22.127 Relative Read Latency: 0 00:15:22.127 Relative Write Throughput: 0 00:15:22.127 Relative Write Latency: 0 00:15:22.127 Idle Power: Not Reported 00:15:22.127 Active Power: Not Reported 00:15:22.127 Non-Operational Permissive Mode: Not Supported 00:15:22.127 00:15:22.127 Health Information 00:15:22.127 ================== 00:15:22.127 Critical Warnings: 00:15:22.127 Available Spare Space: OK 00:15:22.127 Temperature: OK 00:15:22.127 Device Reliability: OK 00:15:22.127 Read Only: No 00:15:22.127 Volatile Memory Backup: OK 00:15:22.127 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:22.127 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:22.127 Available Spare: 0% 00:15:22.127 
Available Sp[2024-07-14 09:25:06.387958] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:22.127 [2024-07-14 09:25:06.387975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:22.127 [2024-07-14 09:25:06.388020] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:15:22.127 [2024-07-14 09:25:06.388038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.127 [2024-07-14 09:25:06.388049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.127 [2024-07-14 09:25:06.388059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.127 [2024-07-14 09:25:06.388069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.127 [2024-07-14 09:25:06.391877] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:22.127 [2024-07-14 09:25:06.391898] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:22.127 [2024-07-14 09:25:06.392601] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:22.127 [2024-07-14 09:25:06.392689] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:15:22.127 [2024-07-14 09:25:06.392702] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:15:22.127 [2024-07-14 09:25:06.393616] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:22.127 [2024-07-14 09:25:06.393638] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:15:22.127 [2024-07-14 09:25:06.393694] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:22.127 [2024-07-14 09:25:06.395655] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:22.127 are Threshold: 0% 00:15:22.127 Life Percentage Used: 0% 00:15:22.127 Data Units Read: 0 00:15:22.127 Data Units Written: 0 00:15:22.127 Host Read Commands: 0 00:15:22.127 Host Write Commands: 0 00:15:22.127 Controller Busy Time: 0 minutes 00:15:22.127 Power Cycles: 0 00:15:22.127 Power On Hours: 0 hours 00:15:22.127 Unsafe Shutdowns: 0 00:15:22.127 Unrecoverable Media Errors: 0 00:15:22.127 Lifetime Error Log Entries: 0 00:15:22.127 Warning Temperature Time: 0 minutes 00:15:22.127 Critical Temperature Time: 0 minutes 00:15:22.127 00:15:22.127 Number of Queues 00:15:22.127 ================ 00:15:22.128 Number of I/O Submission Queues: 127 00:15:22.128 Number of I/O Completion Queues: 127 00:15:22.128 00:15:22.128 Active Namespaces 00:15:22.128 ================= 00:15:22.128 Namespace ID:1 00:15:22.128 Error Recovery Timeout: Unlimited 00:15:22.128 Command 
Set Identifier: NVM (00h) 00:15:22.128 Deallocate: Supported 00:15:22.128 Deallocated/Unwritten Error: Not Supported 00:15:22.128 Deallocated Read Value: Unknown 00:15:22.128 Deallocate in Write Zeroes: Not Supported 00:15:22.128 Deallocated Guard Field: 0xFFFF 00:15:22.128 Flush: Supported 00:15:22.128 Reservation: Supported 00:15:22.128 Namespace Sharing Capabilities: Multiple Controllers 00:15:22.128 Size (in LBAs): 131072 (0GiB) 00:15:22.128 Capacity (in LBAs): 131072 (0GiB) 00:15:22.128 Utilization (in LBAs): 131072 (0GiB) 00:15:22.128 NGUID: 72E04618E719425D9486CD4295638238 00:15:22.128 UUID: 72e04618-e719-425d-9486-cd4295638238 00:15:22.128 Thin Provisioning: Not Supported 00:15:22.128 Per-NS Atomic Units: Yes 00:15:22.128 Atomic Boundary Size (Normal): 0 00:15:22.128 Atomic Boundary Size (PFail): 0 00:15:22.128 Atomic Boundary Offset: 0 00:15:22.128 Maximum Single Source Range Length: 65535 00:15:22.128 Maximum Copy Length: 65535 00:15:22.128 Maximum Source Range Count: 1 00:15:22.128 NGUID/EUI64 Never Reused: No 00:15:22.128 Namespace Write Protected: No 00:15:22.128 Number of LBA Formats: 1 00:15:22.128 Current LBA Format: LBA Format #00 00:15:22.128 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:22.128 00:15:22.128 09:25:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:22.128 EAL: No free 2048 kB hugepages reported on node 1 00:15:22.386 [2024-07-14 09:25:06.629715] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:27.652 Initializing NVMe Controllers 00:15:27.652 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:27.652 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:27.652 Initialization complete. Launching workers. 00:15:27.652 ======================================================== 00:15:27.652 Latency(us) 00:15:27.652 Device Information : IOPS MiB/s Average min max 00:15:27.652 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 34104.75 133.22 3753.15 1180.33 9996.48 00:15:27.652 ======================================================== 00:15:27.652 Total : 34104.75 133.22 3753.15 1180.33 9996.48 00:15:27.652 00:15:27.652 [2024-07-14 09:25:11.652229] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:27.652 09:25:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:27.652 EAL: No free 2048 kB hugepages reported on node 1 00:15:27.652 [2024-07-14 09:25:11.883364] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:32.916 Initializing NVMe Controllers 00:15:32.916 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:32.916 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:32.916 Initialization complete. Launching workers. 
00:15:32.916 ======================================================== 00:15:32.916 Latency(us) 00:15:32.916 Device Information : IOPS MiB/s Average min max 00:15:32.916 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16012.35 62.55 7999.04 7722.26 15978.86 00:15:32.916 ======================================================== 00:15:32.916 Total : 16012.35 62.55 7999.04 7722.26 15978.86 00:15:32.916 00:15:32.916 [2024-07-14 09:25:16.924510] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:32.916 09:25:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:32.916 EAL: No free 2048 kB hugepages reported on node 1 00:15:32.916 [2024-07-14 09:25:17.138540] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:38.216 [2024-07-14 09:25:22.200178] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:38.216 Initializing NVMe Controllers 00:15:38.216 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:38.216 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:38.216 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:38.216 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:38.216 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:38.216 Initialization complete. Launching workers. 00:15:38.216 Starting thread on core 2 00:15:38.216 Starting thread on core 3 00:15:38.216 Starting thread on core 1 00:15:38.216 09:25:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:38.216 EAL: No free 2048 kB hugepages reported on node 1 00:15:38.216 [2024-07-14 09:25:22.499324] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:41.496 [2024-07-14 09:25:25.567313] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:41.496 Initializing NVMe Controllers 00:15:41.496 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:41.496 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:41.496 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:41.496 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:41.496 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:41.496 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:41.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:41.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:41.496 Initialization complete. Launching workers. 
00:15:41.496 Starting thread on core 1 with urgent priority queue 00:15:41.496 Starting thread on core 2 with urgent priority queue 00:15:41.496 Starting thread on core 3 with urgent priority queue 00:15:41.496 Starting thread on core 0 with urgent priority queue 00:15:41.496 SPDK bdev Controller (SPDK1 ) core 0: 5279.67 IO/s 18.94 secs/100000 ios 00:15:41.496 SPDK bdev Controller (SPDK1 ) core 1: 4884.67 IO/s 20.47 secs/100000 ios 00:15:41.496 SPDK bdev Controller (SPDK1 ) core 2: 5717.67 IO/s 17.49 secs/100000 ios 00:15:41.496 SPDK bdev Controller (SPDK1 ) core 3: 5794.00 IO/s 17.26 secs/100000 ios 00:15:41.496 ======================================================== 00:15:41.496 00:15:41.496 09:25:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:41.496 EAL: No free 2048 kB hugepages reported on node 1 00:15:41.496 [2024-07-14 09:25:25.869441] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:41.496 Initializing NVMe Controllers 00:15:41.496 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:41.496 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:41.496 Namespace ID: 1 size: 0GB 00:15:41.496 Initialization complete. 00:15:41.496 INFO: using host memory buffer for IO 00:15:41.496 Hello world! 00:15:41.496 [2024-07-14 09:25:25.904025] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:41.754 09:25:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:41.754 EAL: No free 2048 kB hugepages reported on node 1 00:15:41.754 [2024-07-14 09:25:26.200314] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:43.129 Initializing NVMe Controllers 00:15:43.129 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:43.129 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:43.129 Initialization complete. Launching workers. 
00:15:43.129 submit (in ns) avg, min, max = 7237.3, 3515.6, 4016625.6 00:15:43.129 complete (in ns) avg, min, max = 27116.7, 2061.1, 4015265.6 00:15:43.129 00:15:43.129 Submit histogram 00:15:43.129 ================ 00:15:43.129 Range in us Cumulative Count 00:15:43.129 3.508 - 3.532: 0.4430% ( 59) 00:15:43.129 3.532 - 3.556: 1.5167% ( 143) 00:15:43.129 3.556 - 3.579: 4.2499% ( 364) 00:15:43.129 3.579 - 3.603: 9.4008% ( 686) 00:15:43.129 3.603 - 3.627: 17.9456% ( 1138) 00:15:43.129 3.627 - 3.650: 27.4215% ( 1262) 00:15:43.129 3.650 - 3.674: 36.4619% ( 1204) 00:15:43.129 3.674 - 3.698: 43.7153% ( 966) 00:15:43.129 3.698 - 3.721: 50.4881% ( 902) 00:15:43.129 3.721 - 3.745: 55.3086% ( 642) 00:15:43.129 3.745 - 3.769: 59.5585% ( 566) 00:15:43.129 3.769 - 3.793: 62.9899% ( 457) 00:15:43.129 3.793 - 3.816: 66.1961% ( 427) 00:15:43.129 3.816 - 3.840: 69.5825% ( 451) 00:15:43.129 3.840 - 3.864: 73.6147% ( 537) 00:15:43.129 3.864 - 3.887: 77.6017% ( 531) 00:15:43.129 3.887 - 3.911: 81.5663% ( 528) 00:15:43.129 3.911 - 3.935: 84.7875% ( 429) 00:15:43.129 3.935 - 3.959: 86.9650% ( 290) 00:15:43.129 3.959 - 3.982: 88.7746% ( 241) 00:15:43.129 3.982 - 4.006: 90.3139% ( 205) 00:15:43.129 4.006 - 4.030: 91.4777% ( 155) 00:15:43.129 4.030 - 4.053: 92.5965% ( 149) 00:15:43.129 4.053 - 4.077: 93.5801% ( 131) 00:15:43.129 4.077 - 4.101: 94.3235% ( 99) 00:15:43.129 4.101 - 4.124: 95.1344% ( 108) 00:15:43.129 4.124 - 4.148: 95.7276% ( 79) 00:15:43.129 4.148 - 4.172: 96.0655% ( 45) 00:15:43.129 4.172 - 4.196: 96.3883% ( 43) 00:15:43.129 4.196 - 4.219: 96.5911% ( 27) 00:15:43.129 4.219 - 4.243: 96.7413% ( 20) 00:15:43.129 4.243 - 4.267: 96.8464% ( 14) 00:15:43.129 4.267 - 4.290: 96.9590% ( 15) 00:15:43.129 4.290 - 4.314: 97.0191% ( 8) 00:15:43.129 4.314 - 4.338: 97.0716% ( 7) 00:15:43.129 4.338 - 4.361: 97.1467% ( 10) 00:15:43.129 4.361 - 4.385: 97.2143% ( 9) 00:15:43.129 4.385 - 4.409: 97.2744% ( 8) 00:15:43.129 4.409 - 4.433: 97.3044% ( 4) 00:15:43.129 4.433 - 4.456: 97.3119% ( 1) 00:15:43.129 4.456 - 4.480: 97.3419% ( 4) 00:15:43.129 4.504 - 4.527: 97.3495% ( 1) 00:15:43.129 4.527 - 4.551: 97.3645% ( 2) 00:15:43.129 4.551 - 4.575: 97.3795% ( 2) 00:15:43.129 4.575 - 4.599: 97.4170% ( 5) 00:15:43.129 4.599 - 4.622: 97.4471% ( 4) 00:15:43.129 4.622 - 4.646: 97.4621% ( 2) 00:15:43.129 4.646 - 4.670: 97.5146% ( 7) 00:15:43.129 4.670 - 4.693: 97.5672% ( 7) 00:15:43.129 4.693 - 4.717: 97.6198% ( 7) 00:15:43.129 4.717 - 4.741: 97.6573% ( 5) 00:15:43.129 4.741 - 4.764: 97.6798% ( 3) 00:15:43.129 4.764 - 4.788: 97.7699% ( 12) 00:15:43.129 4.788 - 4.812: 97.8075% ( 5) 00:15:43.129 4.812 - 4.836: 97.8450% ( 5) 00:15:43.129 4.836 - 4.859: 97.9051% ( 8) 00:15:43.129 4.859 - 4.883: 97.9426% ( 5) 00:15:43.129 4.883 - 4.907: 98.0027% ( 8) 00:15:43.129 4.907 - 4.930: 98.0327% ( 4) 00:15:43.129 4.930 - 4.954: 98.1078% ( 10) 00:15:43.129 4.954 - 4.978: 98.1153% ( 1) 00:15:43.129 4.978 - 5.001: 98.1379% ( 3) 00:15:43.129 5.025 - 5.049: 98.1754% ( 5) 00:15:43.129 5.049 - 5.073: 98.1979% ( 3) 00:15:43.129 5.073 - 5.096: 98.2129% ( 2) 00:15:43.129 5.096 - 5.120: 98.2280% ( 2) 00:15:43.129 5.120 - 5.144: 98.2355% ( 1) 00:15:43.129 5.144 - 5.167: 98.2430% ( 1) 00:15:43.129 5.191 - 5.215: 98.2505% ( 1) 00:15:43.129 5.215 - 5.239: 98.2730% ( 3) 00:15:43.129 5.239 - 5.262: 98.2805% ( 1) 00:15:43.129 5.262 - 5.286: 98.3030% ( 3) 00:15:43.129 5.286 - 5.310: 98.3106% ( 1) 00:15:43.129 5.310 - 5.333: 98.3181% ( 1) 00:15:43.129 5.333 - 5.357: 98.3481% ( 4) 00:15:43.129 5.357 - 5.381: 98.3556% ( 1) 00:15:43.129 5.381 - 5.404: 98.3631% ( 1) 
00:15:43.129 5.428 - 5.452: 98.3706% ( 1) 00:15:43.129 5.452 - 5.476: 98.3781% ( 1) 00:15:43.129 5.476 - 5.499: 98.4007% ( 3) 00:15:43.129 5.547 - 5.570: 98.4157% ( 2) 00:15:43.129 5.570 - 5.594: 98.4232% ( 1) 00:15:43.129 5.594 - 5.618: 98.4382% ( 2) 00:15:43.129 5.641 - 5.665: 98.4532% ( 2) 00:15:43.129 5.713 - 5.736: 98.4757% ( 3) 00:15:43.129 5.760 - 5.784: 98.4833% ( 1) 00:15:43.129 5.784 - 5.807: 98.4908% ( 1) 00:15:43.129 5.855 - 5.879: 98.4983% ( 1) 00:15:43.129 5.902 - 5.926: 98.5133% ( 2) 00:15:43.129 5.926 - 5.950: 98.5208% ( 1) 00:15:43.129 6.044 - 6.068: 98.5283% ( 1) 00:15:43.129 6.068 - 6.116: 98.5358% ( 1) 00:15:43.129 6.163 - 6.210: 98.5433% ( 1) 00:15:43.129 6.258 - 6.305: 98.5508% ( 1) 00:15:43.129 6.779 - 6.827: 98.5583% ( 1) 00:15:43.129 6.874 - 6.921: 98.5659% ( 1) 00:15:43.129 6.921 - 6.969: 98.5734% ( 1) 00:15:43.129 7.016 - 7.064: 98.5809% ( 1) 00:15:43.129 7.111 - 7.159: 98.5959% ( 2) 00:15:43.129 7.206 - 7.253: 98.6034% ( 1) 00:15:43.129 7.301 - 7.348: 98.6184% ( 2) 00:15:43.129 7.348 - 7.396: 98.6259% ( 1) 00:15:43.129 7.396 - 7.443: 98.6409% ( 2) 00:15:43.129 7.443 - 7.490: 98.6484% ( 1) 00:15:43.129 7.490 - 7.538: 98.6635% ( 2) 00:15:43.129 7.538 - 7.585: 98.6710% ( 1) 00:15:43.129 7.633 - 7.680: 98.6785% ( 1) 00:15:43.129 7.727 - 7.775: 98.7010% ( 3) 00:15:43.129 7.775 - 7.822: 98.7085% ( 1) 00:15:43.129 7.822 - 7.870: 98.7160% ( 1) 00:15:43.129 7.870 - 7.917: 98.7235% ( 1) 00:15:43.129 7.964 - 8.012: 98.7461% ( 3) 00:15:43.129 8.012 - 8.059: 98.7611% ( 2) 00:15:43.129 8.107 - 8.154: 98.7761% ( 2) 00:15:43.129 8.201 - 8.249: 98.7836% ( 1) 00:15:43.129 8.249 - 8.296: 98.7986% ( 2) 00:15:43.129 8.296 - 8.344: 98.8061% ( 1) 00:15:43.129 8.391 - 8.439: 98.8136% ( 1) 00:15:43.129 8.486 - 8.533: 98.8211% ( 1) 00:15:43.129 8.628 - 8.676: 98.8287% ( 1) 00:15:43.129 8.770 - 8.818: 98.8362% ( 1) 00:15:43.129 9.007 - 9.055: 98.8437% ( 1) 00:15:43.129 9.055 - 9.102: 98.8512% ( 1) 00:15:43.129 9.244 - 9.292: 98.8587% ( 1) 00:15:43.129 9.292 - 9.339: 98.8662% ( 1) 00:15:43.129 9.387 - 9.434: 98.8737% ( 1) 00:15:43.129 9.481 - 9.529: 98.8812% ( 1) 00:15:43.129 9.719 - 9.766: 98.8887% ( 1) 00:15:43.129 9.908 - 9.956: 98.8962% ( 1) 00:15:43.129 10.287 - 10.335: 98.9037% ( 1) 00:15:43.129 10.382 - 10.430: 98.9112% ( 1) 00:15:43.129 11.283 - 11.330: 98.9188% ( 1) 00:15:43.129 11.330 - 11.378: 98.9263% ( 1) 00:15:43.129 11.425 - 11.473: 98.9338% ( 1) 00:15:43.129 11.899 - 11.947: 98.9413% ( 1) 00:15:43.129 12.041 - 12.089: 98.9488% ( 1) 00:15:43.130 12.610 - 12.705: 98.9563% ( 1) 00:15:43.130 13.179 - 13.274: 98.9638% ( 1) 00:15:43.130 13.369 - 13.464: 98.9713% ( 1) 00:15:43.130 13.464 - 13.559: 98.9788% ( 1) 00:15:43.130 13.653 - 13.748: 98.9863% ( 1) 00:15:43.130 13.938 - 14.033: 98.9938% ( 1) 00:15:43.130 14.222 - 14.317: 99.0014% ( 1) 00:15:43.130 17.067 - 17.161: 99.0089% ( 1) 00:15:43.130 17.161 - 17.256: 99.0164% ( 1) 00:15:43.130 17.256 - 17.351: 99.0239% ( 1) 00:15:43.130 17.351 - 17.446: 99.0389% ( 2) 00:15:43.130 17.446 - 17.541: 99.0539% ( 2) 00:15:43.130 17.541 - 17.636: 99.0915% ( 5) 00:15:43.130 17.636 - 17.730: 99.1365% ( 6) 00:15:43.130 17.730 - 17.825: 99.1741% ( 5) 00:15:43.130 17.825 - 17.920: 99.1966% ( 3) 00:15:43.130 17.920 - 18.015: 99.2867% ( 12) 00:15:43.130 18.015 - 18.110: 99.3543% ( 9) 00:15:43.130 18.110 - 18.204: 99.3993% ( 6) 00:15:43.130 18.204 - 18.299: 99.4293% ( 4) 00:15:43.130 18.299 - 18.394: 99.4744% ( 6) 00:15:43.130 18.394 - 18.489: 99.5645% ( 12) 00:15:43.130 18.489 - 18.584: 99.6396% ( 10) 00:15:43.130 18.584 - 18.679: 99.6997% ( 8) 
00:15:43.130 18.679 - 18.773: 99.7447% ( 6) 00:15:43.130 18.773 - 18.868: 99.7672% ( 3) 00:15:43.130 18.868 - 18.963: 99.8048% ( 5) 00:15:43.130 18.963 - 19.058: 99.8123% ( 1) 00:15:43.130 19.058 - 19.153: 99.8498% ( 5) 00:15:43.130 19.153 - 19.247: 99.8648% ( 2) 00:15:43.130 19.342 - 19.437: 99.8799% ( 2) 00:15:43.130 19.437 - 19.532: 99.8874% ( 1) 00:15:43.130 22.187 - 22.281: 99.8949% ( 1) 00:15:43.130 22.281 - 22.376: 99.9024% ( 1) 00:15:43.130 26.169 - 26.359: 99.9099% ( 1) 00:15:43.130 28.824 - 29.013: 99.9174% ( 1) 00:15:43.130 3980.705 - 4004.978: 99.9775% ( 8) 00:15:43.130 4004.978 - 4029.250: 100.0000% ( 3) 00:15:43.130 00:15:43.130 Complete histogram 00:15:43.130 ================== 00:15:43.130 Range in us Cumulative Count 00:15:43.130 2.050 - 2.062: 0.0075% ( 1) 00:15:43.130 2.062 - 2.074: 15.3702% ( 2046) 00:15:43.130 2.074 - 2.086: 41.0122% ( 3415) 00:15:43.130 2.086 - 2.098: 43.5200% ( 334) 00:15:43.130 2.098 - 2.110: 54.7980% ( 1502) 00:15:43.130 2.110 - 2.121: 60.7899% ( 798) 00:15:43.130 2.121 - 2.133: 62.2841% ( 199) 00:15:43.130 2.133 - 2.145: 71.1518% ( 1181) 00:15:43.130 2.145 - 2.157: 76.4604% ( 707) 00:15:43.130 2.157 - 2.169: 77.6543% ( 159) 00:15:43.130 2.169 - 2.181: 81.0182% ( 448) 00:15:43.130 2.181 - 2.193: 82.5574% ( 205) 00:15:43.130 2.193 - 2.204: 83.1131% ( 74) 00:15:43.130 2.204 - 2.216: 86.4544% ( 445) 00:15:43.130 2.216 - 2.228: 89.5330% ( 410) 00:15:43.130 2.228 - 2.240: 91.3125% ( 237) 00:15:43.130 2.240 - 2.252: 92.8818% ( 209) 00:15:43.130 2.252 - 2.264: 93.6552% ( 103) 00:15:43.130 2.264 - 2.276: 93.8805% ( 30) 00:15:43.130 2.276 - 2.287: 94.2259% ( 46) 00:15:43.130 2.287 - 2.299: 94.7214% ( 66) 00:15:43.130 2.299 - 2.311: 95.2170% ( 66) 00:15:43.130 2.311 - 2.323: 95.4648% ( 33) 00:15:43.130 2.323 - 2.335: 95.5699% ( 14) 00:15:43.130 2.335 - 2.347: 95.6074% ( 5) 00:15:43.130 2.347 - 2.359: 95.6825% ( 10) 00:15:43.130 2.359 - 2.370: 95.9228% ( 32) 00:15:43.130 2.370 - 2.382: 96.2081% ( 38) 00:15:43.130 2.382 - 2.394: 96.5385% ( 44) 00:15:43.130 2.394 - 2.406: 96.8238% ( 38) 00:15:43.130 2.406 - 2.418: 97.0416% ( 29) 00:15:43.130 2.418 - 2.430: 97.2368% ( 26) 00:15:43.130 2.430 - 2.441: 97.4471% ( 28) 00:15:43.130 2.441 - 2.453: 97.5672% ( 16) 00:15:43.130 2.453 - 2.465: 97.7324% ( 22) 00:15:43.130 2.465 - 2.477: 97.8525% ( 16) 00:15:43.130 2.477 - 2.489: 97.9501% ( 13) 00:15:43.130 2.489 - 2.501: 98.0553% ( 14) 00:15:43.130 2.501 - 2.513: 98.1303% ( 10) 00:15:43.130 2.513 - 2.524: 98.2054% ( 10) 00:15:43.130 2.524 - 2.536: 98.2280% ( 3) 00:15:43.130 2.536 - 2.548: 98.2430% ( 2) 00:15:43.130 2.548 - 2.560: 98.2655% ( 3) 00:15:43.130 2.560 - 2.572: 98.2805% ( 2) 00:15:43.130 2.584 - 2.596: 98.2880% ( 1) 00:15:43.130 2.596 - 2.607: 98.3106% ( 3) 00:15:43.130 2.607 - 2.619: 98.3256% ( 2) 00:15:43.130 2.631 - 2.643: 98.3481% ( 3) 00:15:43.130 2.643 - 2.655: 98.3556% ( 1) 00:15:43.130 2.655 - 2.667: 98.3631% ( 1) 00:15:43.130 2.785 - 2.797: 98.3706% ( 1) 00:15:43.130 2.892 - 2.904: 98.3781% ( 1) 00:15:43.130 2.916 - 2.927: 98.3932% ( 2) 00:15:43.130 2.927 - 2.939: 98.4082% ( 2) 00:15:43.130 2.951 - 2.963: 98.4157% ( 1) 00:15:43.130 2.963 - 2.975: 98.4232% ( 1) 00:15:43.130 2.975 - 2.987: 98.4382% ( 2) 00:15:43.130 2.987 - 2.999: 98.4457% ( 1) 00:15:43.130 3.034 - 3.058: 98.4532% ( 1) 00:15:43.130 3.058 - 3.081: 98.4682% ( 2) 00:15:43.130 3.129 - 3.153: 98.4833% ( 2) 00:15:43.130 3.153 - 3.176: 98.5058% ( 3) 00:15:43.130 3.176 - 3.200: 98.5208% ( 2) 00:15:43.130 3.200 - 3.224: 98.5283% ( 1) 00:15:43.130 3.247 - 3.271: 98.5659% ( 5) 00:15:43.130 3.271 - 
3.295: 98.5884% ( 3) 00:15:43.130 3.295 - 3.319: 98.6409% ( 7) 00:15:43.130 3.319 - 3.342: 98.6560% ( 2) 00:15:43.130 3.342 - 3.366: 98.6710% ( 2) 00:15:43.130 3.390 - 3.413: 98.6935% ( 3) 00:15:43.130 3.413 - 3.437: 98.7010% ( 1) 00:15:43.130 3.437 - 3.461: 98.7160% ( 2) 00:15:43.130 3.461 - 3.484: 98.7461% ( 4) 00:15:43.130 3.484 - 3.508: 98.7611% ( 2) 00:15:43.130 3.556 - 3.579: 98.7911% ( 4) 00:15:43.130 3.579 - 3.603: 98.7986% ( 1) 00:15:43.130 3.603 - 3.627: 98.8061% ( 1) 00:15:43.130 [2024-07-14 09:25:27.221584] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:43.130 3.674 - 3.698: 98.8136% ( 1) 00:15:43.130 3.816 - 3.840: 98.8287% ( 2) 00:15:43.130 3.935 - 3.959: 98.8512% ( 3) 00:15:43.130 3.982 - 4.006: 98.8587% ( 1) 00:15:43.130 5.120 - 5.144: 98.8662% ( 1) 00:15:43.130 5.547 - 5.570: 98.8737% ( 1) 00:15:43.130 5.713 - 5.736: 98.8812% ( 1) 00:15:43.130 5.760 - 5.784: 98.8887% ( 1) 00:15:43.130 5.926 - 5.950: 98.8962% ( 1) 00:15:43.130 6.021 - 6.044: 98.9037% ( 1) 00:15:43.130 6.163 - 6.210: 98.9112% ( 1) 00:15:43.130 6.258 - 6.305: 98.9263% ( 2) 00:15:43.130 6.353 - 6.400: 98.9338% ( 1) 00:15:43.130 6.400 - 6.447: 98.9413% ( 1) 00:15:43.130 6.542 - 6.590: 98.9488% ( 1) 00:15:43.130 6.637 - 6.684: 98.9563% ( 1) 00:15:43.130 6.827 - 6.874: 98.9638% ( 1) 00:15:43.130 6.874 - 6.921: 98.9713% ( 1) 00:15:43.130 7.443 - 7.490: 98.9788% ( 1) 00:15:43.130 9.339 - 9.387: 98.9863% ( 1) 00:15:43.130 9.576 - 9.624: 98.9938% ( 1) 00:15:43.130 11.378 - 11.425: 99.0014% ( 1) 00:15:43.130 15.455 - 15.550: 99.0089% ( 1) 00:15:43.130 15.644 - 15.739: 99.0389% ( 4) 00:15:43.130 15.834 - 15.929: 99.0539% ( 2) 00:15:43.130 15.929 - 16.024: 99.0689% ( 2) 00:15:43.130 16.024 - 16.119: 99.0764% ( 1) 00:15:43.130 16.119 - 16.213: 99.0990% ( 3) 00:15:43.130 16.213 - 16.308: 99.1290% ( 4) 00:15:43.130 16.308 - 16.403: 99.1590% ( 4) 00:15:43.130 16.403 - 16.498: 99.1966% ( 5) 00:15:43.130 16.498 - 16.593: 99.2041% ( 1) 00:15:43.130 16.593 - 16.687: 99.2566% ( 7) 00:15:43.130 16.687 - 16.782: 99.2717% ( 2) 00:15:43.130 16.782 - 16.877: 99.2867% ( 2) 00:15:43.130 16.877 - 16.972: 99.2942% ( 1) 00:15:43.130 16.972 - 17.067: 99.3167% ( 3) 00:15:43.130 17.067 - 17.161: 99.3392% ( 3) 00:15:43.130 17.161 - 17.256: 99.3467% ( 1) 00:15:43.130 17.256 - 17.351: 99.3543% ( 1) 00:15:43.130 17.351 - 17.446: 99.3618% ( 1) 00:15:43.130 17.920 - 18.015: 99.3693% ( 1) 00:15:43.130 18.868 - 18.963: 99.3768% ( 1) 00:15:43.130 3980.705 - 4004.978: 99.9249% ( 73) 00:15:43.130 4004.978 - 4029.250: 100.0000% ( 10) 00:15:43.130 00:15:43.130 09:25:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:43.130 09:25:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:43.130 09:25:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:43.130 09:25:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:43.130 09:25:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:43.130 [ 00:15:43.130 { 00:15:43.130 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:43.130 "subtype": "Discovery", 00:15:43.130 "listen_addresses": [], 00:15:43.130 "allow_any_host": true, 00:15:43.130 "hosts": [] 00:15:43.130 }, 00:15:43.130 { 00:15:43.130 
"nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:43.130 "subtype": "NVMe", 00:15:43.130 "listen_addresses": [ 00:15:43.130 { 00:15:43.130 "trtype": "VFIOUSER", 00:15:43.130 "adrfam": "IPv4", 00:15:43.130 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:43.130 "trsvcid": "0" 00:15:43.130 } 00:15:43.130 ], 00:15:43.130 "allow_any_host": true, 00:15:43.130 "hosts": [], 00:15:43.130 "serial_number": "SPDK1", 00:15:43.130 "model_number": "SPDK bdev Controller", 00:15:43.130 "max_namespaces": 32, 00:15:43.130 "min_cntlid": 1, 00:15:43.130 "max_cntlid": 65519, 00:15:43.130 "namespaces": [ 00:15:43.130 { 00:15:43.130 "nsid": 1, 00:15:43.131 "bdev_name": "Malloc1", 00:15:43.131 "name": "Malloc1", 00:15:43.131 "nguid": "72E04618E719425D9486CD4295638238", 00:15:43.131 "uuid": "72e04618-e719-425d-9486-cd4295638238" 00:15:43.131 } 00:15:43.131 ] 00:15:43.131 }, 00:15:43.131 { 00:15:43.131 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:43.131 "subtype": "NVMe", 00:15:43.131 "listen_addresses": [ 00:15:43.131 { 00:15:43.131 "trtype": "VFIOUSER", 00:15:43.131 "adrfam": "IPv4", 00:15:43.131 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:43.131 "trsvcid": "0" 00:15:43.131 } 00:15:43.131 ], 00:15:43.131 "allow_any_host": true, 00:15:43.131 "hosts": [], 00:15:43.131 "serial_number": "SPDK2", 00:15:43.131 "model_number": "SPDK bdev Controller", 00:15:43.131 "max_namespaces": 32, 00:15:43.131 "min_cntlid": 1, 00:15:43.131 "max_cntlid": 65519, 00:15:43.131 "namespaces": [ 00:15:43.131 { 00:15:43.131 "nsid": 1, 00:15:43.131 "bdev_name": "Malloc2", 00:15:43.131 "name": "Malloc2", 00:15:43.131 "nguid": "B1B83843791D46559569545530C1952A", 00:15:43.131 "uuid": "b1b83843-791d-4655-9569-545530c1952a" 00:15:43.131 } 00:15:43.131 ] 00:15:43.131 } 00:15:43.131 ] 00:15:43.131 09:25:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:43.131 09:25:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=704233 00:15:43.131 09:25:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:43.131 09:25:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:43.131 09:25:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:43.131 09:25:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:43.131 09:25:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:43.131 09:25:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:43.131 09:25:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:43.131 09:25:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:43.389 EAL: No free 2048 kB hugepages reported on node 1 00:15:43.389 [2024-07-14 09:25:27.729371] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:43.389 Malloc3 00:15:43.647 09:25:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:43.647 [2024-07-14 09:25:28.089959] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:43.906 09:25:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:43.906 Asynchronous Event Request test 00:15:43.906 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:43.906 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:43.906 Registering asynchronous event callbacks... 00:15:43.906 Starting namespace attribute notice tests for all controllers... 00:15:43.906 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:43.906 aer_cb - Changed Namespace 00:15:43.906 Cleaning up... 00:15:43.906 [ 00:15:43.906 { 00:15:43.906 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:43.906 "subtype": "Discovery", 00:15:43.906 "listen_addresses": [], 00:15:43.906 "allow_any_host": true, 00:15:43.906 "hosts": [] 00:15:43.906 }, 00:15:43.906 { 00:15:43.906 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:43.906 "subtype": "NVMe", 00:15:43.906 "listen_addresses": [ 00:15:43.906 { 00:15:43.906 "trtype": "VFIOUSER", 00:15:43.906 "adrfam": "IPv4", 00:15:43.906 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:43.906 "trsvcid": "0" 00:15:43.906 } 00:15:43.906 ], 00:15:43.906 "allow_any_host": true, 00:15:43.906 "hosts": [], 00:15:43.906 "serial_number": "SPDK1", 00:15:43.906 "model_number": "SPDK bdev Controller", 00:15:43.906 "max_namespaces": 32, 00:15:43.906 "min_cntlid": 1, 00:15:43.906 "max_cntlid": 65519, 00:15:43.906 "namespaces": [ 00:15:43.906 { 00:15:43.906 "nsid": 1, 00:15:43.906 "bdev_name": "Malloc1", 00:15:43.906 "name": "Malloc1", 00:15:43.906 "nguid": "72E04618E719425D9486CD4295638238", 00:15:43.906 "uuid": "72e04618-e719-425d-9486-cd4295638238" 00:15:43.906 }, 00:15:43.906 { 00:15:43.906 "nsid": 2, 00:15:43.906 "bdev_name": "Malloc3", 00:15:43.906 "name": "Malloc3", 00:15:43.906 "nguid": "23258149379C466187EE93E7DFA268BF", 00:15:43.906 "uuid": "23258149-379c-4661-87ee-93e7dfa268bf" 00:15:43.906 } 00:15:43.906 ] 00:15:43.906 }, 00:15:43.906 { 00:15:43.906 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:43.906 "subtype": "NVMe", 00:15:43.906 "listen_addresses": [ 00:15:43.906 { 00:15:43.906 "trtype": "VFIOUSER", 00:15:43.906 "adrfam": "IPv4", 00:15:43.906 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:43.906 "trsvcid": "0" 00:15:43.906 } 00:15:43.906 ], 00:15:43.906 "allow_any_host": true, 00:15:43.906 "hosts": [], 00:15:43.906 "serial_number": "SPDK2", 00:15:43.906 "model_number": "SPDK bdev Controller", 00:15:43.906 
"max_namespaces": 32, 00:15:43.906 "min_cntlid": 1, 00:15:43.906 "max_cntlid": 65519, 00:15:43.906 "namespaces": [ 00:15:43.906 { 00:15:43.906 "nsid": 1, 00:15:43.906 "bdev_name": "Malloc2", 00:15:43.906 "name": "Malloc2", 00:15:43.906 "nguid": "B1B83843791D46559569545530C1952A", 00:15:43.906 "uuid": "b1b83843-791d-4655-9569-545530c1952a" 00:15:43.906 } 00:15:43.906 ] 00:15:43.906 } 00:15:43.906 ] 00:15:43.906 09:25:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 704233 00:15:43.906 09:25:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:43.906 09:25:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:43.906 09:25:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:43.906 09:25:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:44.166 [2024-07-14 09:25:28.367304] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:15:44.166 [2024-07-14 09:25:28.367349] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid704369 ] 00:15:44.166 EAL: No free 2048 kB hugepages reported on node 1 00:15:44.166 [2024-07-14 09:25:28.399959] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:44.166 [2024-07-14 09:25:28.409152] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:44.166 [2024-07-14 09:25:28.409195] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f379f400000 00:15:44.166 [2024-07-14 09:25:28.410153] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:44.166 [2024-07-14 09:25:28.411173] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:44.166 [2024-07-14 09:25:28.412171] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:44.166 [2024-07-14 09:25:28.413174] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:44.166 [2024-07-14 09:25:28.414184] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:44.166 [2024-07-14 09:25:28.415194] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:44.166 [2024-07-14 09:25:28.416218] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:44.166 [2024-07-14 09:25:28.417205] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:44.166 [2024-07-14 09:25:28.418232] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:44.166 [2024-07-14 09:25:28.418254] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f379e1b4000 00:15:44.166 [2024-07-14 09:25:28.419371] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:44.166 [2024-07-14 09:25:28.434516] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:44.166 [2024-07-14 09:25:28.434546] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:15:44.166 [2024-07-14 09:25:28.439659] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:44.166 [2024-07-14 09:25:28.439708] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:44.166 [2024-07-14 09:25:28.439792] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:15:44.166 [2024-07-14 09:25:28.439817] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:15:44.166 [2024-07-14 09:25:28.439827] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:15:44.166 [2024-07-14 09:25:28.440662] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:44.166 [2024-07-14 09:25:28.440681] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:15:44.166 [2024-07-14 09:25:28.440693] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:15:44.166 [2024-07-14 09:25:28.441669] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:44.166 [2024-07-14 09:25:28.441688] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:15:44.166 [2024-07-14 09:25:28.441701] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:15:44.166 [2024-07-14 09:25:28.442677] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:44.166 [2024-07-14 09:25:28.442696] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:44.166 [2024-07-14 09:25:28.443682] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:44.166 [2024-07-14 09:25:28.443701] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:15:44.166 [2024-07-14 09:25:28.443710] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:15:44.166 [2024-07-14 09:25:28.443721] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:44.166 [2024-07-14 09:25:28.443834] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:15:44.166 [2024-07-14 09:25:28.443843] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:44.166 [2024-07-14 09:25:28.443871] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:44.166 [2024-07-14 09:25:28.444688] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:44.166 [2024-07-14 09:25:28.445696] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:44.166 [2024-07-14 09:25:28.446700] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:44.166 [2024-07-14 09:25:28.447700] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:44.166 [2024-07-14 09:25:28.447779] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:44.166 [2024-07-14 09:25:28.448719] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:44.166 [2024-07-14 09:25:28.448738] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:44.166 [2024-07-14 09:25:28.448748] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:15:44.166 [2024-07-14 09:25:28.448771] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:15:44.166 [2024-07-14 09:25:28.448785] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:15:44.166 [2024-07-14 09:25:28.448805] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:44.166 [2024-07-14 09:25:28.448815] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:44.166 [2024-07-14 09:25:28.448832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:44.166 [2024-07-14 09:25:28.456879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:44.166 [2024-07-14 09:25:28.456901] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:15:44.166 [2024-07-14 09:25:28.456915] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:15:44.166 [2024-07-14 09:25:28.456923] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:15:44.166 [2024-07-14 09:25:28.456931] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:44.166 [2024-07-14 09:25:28.456939] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:15:44.166 [2024-07-14 09:25:28.456948] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:15:44.166 [2024-07-14 09:25:28.456956] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:15:44.166 [2024-07-14 09:25:28.456969] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:15:44.166 [2024-07-14 09:25:28.456989] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:44.166 [2024-07-14 09:25:28.464878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:44.166 [2024-07-14 09:25:28.464906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:44.166 [2024-07-14 09:25:28.464921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:44.166 [2024-07-14 09:25:28.464934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:44.166 [2024-07-14 09:25:28.464946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:44.167 [2024-07-14 09:25:28.464955] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:15:44.167 [2024-07-14 09:25:28.464970] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:44.167 [2024-07-14 09:25:28.464985] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:44.167 [2024-07-14 09:25:28.472877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:44.167 [2024-07-14 09:25:28.472896] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:15:44.167 [2024-07-14 09:25:28.472905] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:44.167 [2024-07-14 09:25:28.472917] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:15:44.167 [2024-07-14 09:25:28.472927] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:15:44.167 [2024-07-14 09:25:28.472941] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:44.167 [2024-07-14 09:25:28.480877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:44.167 [2024-07-14 09:25:28.480948] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:15:44.167 [2024-07-14 09:25:28.480963] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:15:44.167 [2024-07-14 09:25:28.480976] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:44.167 [2024-07-14 09:25:28.480984] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:44.167 [2024-07-14 09:25:28.480994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:44.167 [2024-07-14 09:25:28.488876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:44.167 [2024-07-14 09:25:28.488904] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:15:44.167 [2024-07-14 09:25:28.488920] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:15:44.167 [2024-07-14 09:25:28.488935] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:15:44.167 [2024-07-14 09:25:28.488951] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:44.167 [2024-07-14 09:25:28.488960] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:44.167 [2024-07-14 09:25:28.488970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:44.167 [2024-07-14 09:25:28.496878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:44.167 [2024-07-14 09:25:28.496905] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:44.167 [2024-07-14 09:25:28.496920] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:44.167 [2024-07-14 09:25:28.496934] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:44.167 [2024-07-14 09:25:28.496942] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:44.167 [2024-07-14 09:25:28.496952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:44.167 [2024-07-14 09:25:28.504876] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:44.167 [2024-07-14 09:25:28.504896] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:44.167 [2024-07-14 09:25:28.504909] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:15:44.167 [2024-07-14 09:25:28.504924] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:15:44.167 [2024-07-14 09:25:28.504934] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:15:44.167 [2024-07-14 09:25:28.504942] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:44.167 [2024-07-14 09:25:28.504950] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:15:44.167 [2024-07-14 09:25:28.504958] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:15:44.167 [2024-07-14 09:25:28.504966] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:15:44.167 [2024-07-14 09:25:28.504974] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:15:44.167 [2024-07-14 09:25:28.504998] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:44.167 [2024-07-14 09:25:28.512874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:44.167 [2024-07-14 09:25:28.512901] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:44.167 [2024-07-14 09:25:28.520878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:44.167 [2024-07-14 09:25:28.520903] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:44.167 [2024-07-14 09:25:28.528890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:44.167 [2024-07-14 09:25:28.528923] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:44.167 [2024-07-14 09:25:28.536878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:44.167 [2024-07-14 09:25:28.536909] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:44.167 [2024-07-14 09:25:28.536921] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:44.167 [2024-07-14 09:25:28.536927] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 
00:15:44.167 [2024-07-14 09:25:28.536933] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:44.167 [2024-07-14 09:25:28.536942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:44.167 [2024-07-14 09:25:28.536954] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:44.167 [2024-07-14 09:25:28.536962] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:44.167 [2024-07-14 09:25:28.536971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:44.167 [2024-07-14 09:25:28.536982] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:44.167 [2024-07-14 09:25:28.536990] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:44.167 [2024-07-14 09:25:28.536999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:44.167 [2024-07-14 09:25:28.537011] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:44.167 [2024-07-14 09:25:28.537019] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:44.167 [2024-07-14 09:25:28.537028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:44.167 [2024-07-14 09:25:28.544875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:44.167 [2024-07-14 09:25:28.544902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:44.167 [2024-07-14 09:25:28.544920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:44.167 [2024-07-14 09:25:28.544932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:44.167 ===================================================== 00:15:44.167 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:44.167 ===================================================== 00:15:44.167 Controller Capabilities/Features 00:15:44.167 ================================ 00:15:44.167 Vendor ID: 4e58 00:15:44.167 Subsystem Vendor ID: 4e58 00:15:44.167 Serial Number: SPDK2 00:15:44.167 Model Number: SPDK bdev Controller 00:15:44.167 Firmware Version: 24.09 00:15:44.167 Recommended Arb Burst: 6 00:15:44.167 IEEE OUI Identifier: 8d 6b 50 00:15:44.167 Multi-path I/O 00:15:44.167 May have multiple subsystem ports: Yes 00:15:44.167 May have multiple controllers: Yes 00:15:44.167 Associated with SR-IOV VF: No 00:15:44.167 Max Data Transfer Size: 131072 00:15:44.167 Max Number of Namespaces: 32 00:15:44.167 Max Number of I/O Queues: 127 00:15:44.167 NVMe Specification Version (VS): 1.3 00:15:44.167 NVMe Specification Version (Identify): 1.3 00:15:44.167 Maximum Queue Entries: 256 00:15:44.167 Contiguous Queues Required: Yes 00:15:44.167 Arbitration Mechanisms 
Supported 00:15:44.167 Weighted Round Robin: Not Supported 00:15:44.167 Vendor Specific: Not Supported 00:15:44.167 Reset Timeout: 15000 ms 00:15:44.167 Doorbell Stride: 4 bytes 00:15:44.167 NVM Subsystem Reset: Not Supported 00:15:44.167 Command Sets Supported 00:15:44.167 NVM Command Set: Supported 00:15:44.167 Boot Partition: Not Supported 00:15:44.167 Memory Page Size Minimum: 4096 bytes 00:15:44.167 Memory Page Size Maximum: 4096 bytes 00:15:44.167 Persistent Memory Region: Not Supported 00:15:44.167 Optional Asynchronous Events Supported 00:15:44.167 Namespace Attribute Notices: Supported 00:15:44.167 Firmware Activation Notices: Not Supported 00:15:44.167 ANA Change Notices: Not Supported 00:15:44.167 PLE Aggregate Log Change Notices: Not Supported 00:15:44.167 LBA Status Info Alert Notices: Not Supported 00:15:44.167 EGE Aggregate Log Change Notices: Not Supported 00:15:44.167 Normal NVM Subsystem Shutdown event: Not Supported 00:15:44.167 Zone Descriptor Change Notices: Not Supported 00:15:44.167 Discovery Log Change Notices: Not Supported 00:15:44.167 Controller Attributes 00:15:44.167 128-bit Host Identifier: Supported 00:15:44.167 Non-Operational Permissive Mode: Not Supported 00:15:44.167 NVM Sets: Not Supported 00:15:44.167 Read Recovery Levels: Not Supported 00:15:44.167 Endurance Groups: Not Supported 00:15:44.167 Predictable Latency Mode: Not Supported 00:15:44.167 Traffic Based Keep ALive: Not Supported 00:15:44.167 Namespace Granularity: Not Supported 00:15:44.168 SQ Associations: Not Supported 00:15:44.168 UUID List: Not Supported 00:15:44.168 Multi-Domain Subsystem: Not Supported 00:15:44.168 Fixed Capacity Management: Not Supported 00:15:44.168 Variable Capacity Management: Not Supported 00:15:44.168 Delete Endurance Group: Not Supported 00:15:44.168 Delete NVM Set: Not Supported 00:15:44.168 Extended LBA Formats Supported: Not Supported 00:15:44.168 Flexible Data Placement Supported: Not Supported 00:15:44.168 00:15:44.168 Controller Memory Buffer Support 00:15:44.168 ================================ 00:15:44.168 Supported: No 00:15:44.168 00:15:44.168 Persistent Memory Region Support 00:15:44.168 ================================ 00:15:44.168 Supported: No 00:15:44.168 00:15:44.168 Admin Command Set Attributes 00:15:44.168 ============================ 00:15:44.168 Security Send/Receive: Not Supported 00:15:44.168 Format NVM: Not Supported 00:15:44.168 Firmware Activate/Download: Not Supported 00:15:44.168 Namespace Management: Not Supported 00:15:44.168 Device Self-Test: Not Supported 00:15:44.168 Directives: Not Supported 00:15:44.168 NVMe-MI: Not Supported 00:15:44.168 Virtualization Management: Not Supported 00:15:44.168 Doorbell Buffer Config: Not Supported 00:15:44.168 Get LBA Status Capability: Not Supported 00:15:44.168 Command & Feature Lockdown Capability: Not Supported 00:15:44.168 Abort Command Limit: 4 00:15:44.168 Async Event Request Limit: 4 00:15:44.168 Number of Firmware Slots: N/A 00:15:44.168 Firmware Slot 1 Read-Only: N/A 00:15:44.168 Firmware Activation Without Reset: N/A 00:15:44.168 Multiple Update Detection Support: N/A 00:15:44.168 Firmware Update Granularity: No Information Provided 00:15:44.168 Per-Namespace SMART Log: No 00:15:44.168 Asymmetric Namespace Access Log Page: Not Supported 00:15:44.168 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:44.168 Command Effects Log Page: Supported 00:15:44.168 Get Log Page Extended Data: Supported 00:15:44.168 Telemetry Log Pages: Not Supported 00:15:44.168 Persistent Event Log Pages: Not Supported 
00:15:44.168 Supported Log Pages Log Page: May Support 00:15:44.168 Commands Supported & Effects Log Page: Not Supported 00:15:44.168 Feature Identifiers & Effects Log Page:May Support 00:15:44.168 NVMe-MI Commands & Effects Log Page: May Support 00:15:44.168 Data Area 4 for Telemetry Log: Not Supported 00:15:44.168 Error Log Page Entries Supported: 128 00:15:44.168 Keep Alive: Supported 00:15:44.168 Keep Alive Granularity: 10000 ms 00:15:44.168 00:15:44.168 NVM Command Set Attributes 00:15:44.168 ========================== 00:15:44.168 Submission Queue Entry Size 00:15:44.168 Max: 64 00:15:44.168 Min: 64 00:15:44.168 Completion Queue Entry Size 00:15:44.168 Max: 16 00:15:44.168 Min: 16 00:15:44.168 Number of Namespaces: 32 00:15:44.168 Compare Command: Supported 00:15:44.168 Write Uncorrectable Command: Not Supported 00:15:44.168 Dataset Management Command: Supported 00:15:44.168 Write Zeroes Command: Supported 00:15:44.168 Set Features Save Field: Not Supported 00:15:44.168 Reservations: Not Supported 00:15:44.168 Timestamp: Not Supported 00:15:44.168 Copy: Supported 00:15:44.168 Volatile Write Cache: Present 00:15:44.168 Atomic Write Unit (Normal): 1 00:15:44.168 Atomic Write Unit (PFail): 1 00:15:44.168 Atomic Compare & Write Unit: 1 00:15:44.168 Fused Compare & Write: Supported 00:15:44.168 Scatter-Gather List 00:15:44.168 SGL Command Set: Supported (Dword aligned) 00:15:44.168 SGL Keyed: Not Supported 00:15:44.168 SGL Bit Bucket Descriptor: Not Supported 00:15:44.168 SGL Metadata Pointer: Not Supported 00:15:44.168 Oversized SGL: Not Supported 00:15:44.168 SGL Metadata Address: Not Supported 00:15:44.168 SGL Offset: Not Supported 00:15:44.168 Transport SGL Data Block: Not Supported 00:15:44.168 Replay Protected Memory Block: Not Supported 00:15:44.168 00:15:44.168 Firmware Slot Information 00:15:44.168 ========================= 00:15:44.168 Active slot: 1 00:15:44.168 Slot 1 Firmware Revision: 24.09 00:15:44.168 00:15:44.168 00:15:44.168 Commands Supported and Effects 00:15:44.168 ============================== 00:15:44.168 Admin Commands 00:15:44.168 -------------- 00:15:44.168 Get Log Page (02h): Supported 00:15:44.168 Identify (06h): Supported 00:15:44.168 Abort (08h): Supported 00:15:44.168 Set Features (09h): Supported 00:15:44.168 Get Features (0Ah): Supported 00:15:44.168 Asynchronous Event Request (0Ch): Supported 00:15:44.168 Keep Alive (18h): Supported 00:15:44.168 I/O Commands 00:15:44.168 ------------ 00:15:44.168 Flush (00h): Supported LBA-Change 00:15:44.168 Write (01h): Supported LBA-Change 00:15:44.168 Read (02h): Supported 00:15:44.168 Compare (05h): Supported 00:15:44.168 Write Zeroes (08h): Supported LBA-Change 00:15:44.168 Dataset Management (09h): Supported LBA-Change 00:15:44.168 Copy (19h): Supported LBA-Change 00:15:44.168 00:15:44.168 Error Log 00:15:44.168 ========= 00:15:44.168 00:15:44.168 Arbitration 00:15:44.168 =========== 00:15:44.168 Arbitration Burst: 1 00:15:44.168 00:15:44.168 Power Management 00:15:44.168 ================ 00:15:44.168 Number of Power States: 1 00:15:44.168 Current Power State: Power State #0 00:15:44.168 Power State #0: 00:15:44.168 Max Power: 0.00 W 00:15:44.168 Non-Operational State: Operational 00:15:44.168 Entry Latency: Not Reported 00:15:44.168 Exit Latency: Not Reported 00:15:44.168 Relative Read Throughput: 0 00:15:44.168 Relative Read Latency: 0 00:15:44.168 Relative Write Throughput: 0 00:15:44.168 Relative Write Latency: 0 00:15:44.168 Idle Power: Not Reported 00:15:44.168 Active Power: Not Reported 00:15:44.168 
Non-Operational Permissive Mode: Not Supported 00:15:44.168 00:15:44.168 Health Information 00:15:44.168 ================== 00:15:44.168 Critical Warnings: 00:15:44.168 Available Spare Space: OK 00:15:44.168 Temperature: OK 00:15:44.168 Device Reliability: OK 00:15:44.168 Read Only: No 00:15:44.168 Volatile Memory Backup: OK 00:15:44.168 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:44.168 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:44.168 Available Spare: 0% 00:15:44.168 Available Sp[2024-07-14 09:25:28.545047] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:44.168 [2024-07-14 09:25:28.552874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:44.168 [2024-07-14 09:25:28.552923] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:15:44.168 [2024-07-14 09:25:28.552942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.168 [2024-07-14 09:25:28.552953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.168 [2024-07-14 09:25:28.552963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.168 [2024-07-14 09:25:28.552972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:44.168 [2024-07-14 09:25:28.553038] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:44.168 [2024-07-14 09:25:28.553065] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:44.168 [2024-07-14 09:25:28.554038] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:44.168 [2024-07-14 09:25:28.554109] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:15:44.168 [2024-07-14 09:25:28.554124] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:15:44.168 [2024-07-14 09:25:28.555047] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:44.168 [2024-07-14 09:25:28.555071] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:15:44.168 [2024-07-14 09:25:28.555123] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:44.168 [2024-07-14 09:25:28.556325] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:44.168 are Threshold: 0% 00:15:44.168 Life Percentage Used: 0% 00:15:44.168 Data Units Read: 0 00:15:44.168 Data Units Written: 0 00:15:44.168 Host Read Commands: 0 00:15:44.168 Host Write Commands: 0 00:15:44.168 Controller Busy Time: 0 minutes 00:15:44.168 Power Cycles: 0 00:15:44.168 Power On Hours: 0 hours 00:15:44.168 Unsafe Shutdowns: 0 00:15:44.168 Unrecoverable Media 
Errors: 0 00:15:44.168 Lifetime Error Log Entries: 0 00:15:44.168 Warning Temperature Time: 0 minutes 00:15:44.168 Critical Temperature Time: 0 minutes 00:15:44.168 00:15:44.168 Number of Queues 00:15:44.168 ================ 00:15:44.168 Number of I/O Submission Queues: 127 00:15:44.168 Number of I/O Completion Queues: 127 00:15:44.168 00:15:44.168 Active Namespaces 00:15:44.168 ================= 00:15:44.168 Namespace ID:1 00:15:44.168 Error Recovery Timeout: Unlimited 00:15:44.168 Command Set Identifier: NVM (00h) 00:15:44.168 Deallocate: Supported 00:15:44.168 Deallocated/Unwritten Error: Not Supported 00:15:44.168 Deallocated Read Value: Unknown 00:15:44.168 Deallocate in Write Zeroes: Not Supported 00:15:44.168 Deallocated Guard Field: 0xFFFF 00:15:44.168 Flush: Supported 00:15:44.168 Reservation: Supported 00:15:44.168 Namespace Sharing Capabilities: Multiple Controllers 00:15:44.168 Size (in LBAs): 131072 (0GiB) 00:15:44.168 Capacity (in LBAs): 131072 (0GiB) 00:15:44.169 Utilization (in LBAs): 131072 (0GiB) 00:15:44.169 NGUID: B1B83843791D46559569545530C1952A 00:15:44.169 UUID: b1b83843-791d-4655-9569-545530c1952a 00:15:44.169 Thin Provisioning: Not Supported 00:15:44.169 Per-NS Atomic Units: Yes 00:15:44.169 Atomic Boundary Size (Normal): 0 00:15:44.169 Atomic Boundary Size (PFail): 0 00:15:44.169 Atomic Boundary Offset: 0 00:15:44.169 Maximum Single Source Range Length: 65535 00:15:44.169 Maximum Copy Length: 65535 00:15:44.169 Maximum Source Range Count: 1 00:15:44.169 NGUID/EUI64 Never Reused: No 00:15:44.169 Namespace Write Protected: No 00:15:44.169 Number of LBA Formats: 1 00:15:44.169 Current LBA Format: LBA Format #00 00:15:44.169 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:44.169 00:15:44.169 09:25:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:44.427 EAL: No free 2048 kB hugepages reported on node 1 00:15:44.427 [2024-07-14 09:25:28.781633] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:49.695 Initializing NVMe Controllers 00:15:49.695 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:49.695 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:49.695 Initialization complete. Launching workers. 
00:15:49.695 ======================================================== 00:15:49.695 Latency(us) 00:15:49.695 Device Information : IOPS MiB/s Average min max 00:15:49.695 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34610.16 135.20 3699.26 1172.74 7633.61 00:15:49.695 ======================================================== 00:15:49.695 Total : 34610.16 135.20 3699.26 1172.74 7633.61 00:15:49.695 00:15:49.695 [2024-07-14 09:25:33.885226] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:49.695 09:25:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:49.695 EAL: No free 2048 kB hugepages reported on node 1 00:15:49.695 [2024-07-14 09:25:34.117878] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:54.959 Initializing NVMe Controllers 00:15:54.959 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:54.959 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:54.959 Initialization complete. Launching workers. 00:15:54.959 ======================================================== 00:15:54.959 Latency(us) 00:15:54.959 Device Information : IOPS MiB/s Average min max 00:15:54.959 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31580.32 123.36 4052.40 1220.32 7855.62 00:15:54.959 ======================================================== 00:15:54.959 Total : 31580.32 123.36 4052.40 1220.32 7855.62 00:15:54.959 00:15:54.959 [2024-07-14 09:25:39.141190] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:54.959 09:25:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:54.959 EAL: No free 2048 kB hugepages reported on node 1 00:15:54.959 [2024-07-14 09:25:39.354072] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:00.255 [2024-07-14 09:25:44.491006] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:00.255 Initializing NVMe Controllers 00:16:00.255 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:00.255 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:00.255 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:16:00.255 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:16:00.255 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:16:00.255 Initialization complete. Launching workers. 
00:16:00.255 Starting thread on core 2 00:16:00.255 Starting thread on core 3 00:16:00.255 Starting thread on core 1 00:16:00.255 09:25:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:16:00.255 EAL: No free 2048 kB hugepages reported on node 1 00:16:00.512 [2024-07-14 09:25:44.792365] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:03.789 [2024-07-14 09:25:47.996933] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:03.789 Initializing NVMe Controllers 00:16:03.789 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:03.789 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:03.789 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:16:03.789 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:16:03.789 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:16:03.789 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:16:03.789 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:03.789 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:03.789 Initialization complete. Launching workers. 00:16:03.789 Starting thread on core 1 with urgent priority queue 00:16:03.789 Starting thread on core 2 with urgent priority queue 00:16:03.789 Starting thread on core 3 with urgent priority queue 00:16:03.789 Starting thread on core 0 with urgent priority queue 00:16:03.789 SPDK bdev Controller (SPDK2 ) core 0: 5231.00 IO/s 19.12 secs/100000 ios 00:16:03.789 SPDK bdev Controller (SPDK2 ) core 1: 4845.33 IO/s 20.64 secs/100000 ios 00:16:03.789 SPDK bdev Controller (SPDK2 ) core 2: 5400.00 IO/s 18.52 secs/100000 ios 00:16:03.789 SPDK bdev Controller (SPDK2 ) core 3: 4258.67 IO/s 23.48 secs/100000 ios 00:16:03.789 ======================================================== 00:16:03.789 00:16:03.789 09:25:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:03.789 EAL: No free 2048 kB hugepages reported on node 1 00:16:04.047 [2024-07-14 09:25:48.297408] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:04.047 Initializing NVMe Controllers 00:16:04.047 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:04.047 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:04.047 Namespace ID: 1 size: 0GB 00:16:04.047 Initialization complete. 00:16:04.047 INFO: using host memory buffer for IO 00:16:04.047 Hello world! 
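All of the example tools exercised above (spdk_nvme_perf, reconnect, arbitration, hello_world) reach the target through the same VFIOUSER transport ID string. A minimal sketch of that invocation pattern, reusing only paths and parameters seen in this run (the TRID variable is illustrative and the spdk build path is abbreviated):

TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
# 4 KiB, queue depth 128, 5 s read pass pinned to core 1 (mask 0x2), as in the run above
build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2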
00:16:04.047 [2024-07-14 09:25:48.310526] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:04.047 09:25:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:04.047 EAL: No free 2048 kB hugepages reported on node 1 00:16:04.305 [2024-07-14 09:25:48.599530] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:05.238 Initializing NVMe Controllers 00:16:05.238 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:05.238 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:05.238 Initialization complete. Launching workers. 00:16:05.238 submit (in ns) avg, min, max = 5454.1, 3508.9, 4016523.3 00:16:05.238 complete (in ns) avg, min, max = 27573.1, 2067.8, 4016421.1 00:16:05.238 00:16:05.238 Submit histogram 00:16:05.238 ================ 00:16:05.238 Range in us Cumulative Count 00:16:05.238 3.508 - 3.532: 0.4373% ( 58) 00:16:05.238 3.532 - 3.556: 1.5758% ( 151) 00:16:05.238 3.556 - 3.579: 4.1770% ( 345) 00:16:05.238 3.579 - 3.603: 9.6660% ( 728) 00:16:05.238 3.603 - 3.627: 17.8089% ( 1080) 00:16:05.238 3.627 - 3.650: 27.7011% ( 1312) 00:16:05.238 3.650 - 3.674: 36.2814% ( 1138) 00:16:05.238 3.674 - 3.698: 44.0097% ( 1025) 00:16:05.238 3.698 - 3.721: 51.4966% ( 993) 00:16:05.238 3.721 - 3.745: 57.4304% ( 787) 00:16:05.238 3.745 - 3.769: 62.2182% ( 635) 00:16:05.238 3.769 - 3.793: 66.3425% ( 547) 00:16:05.238 3.793 - 3.816: 69.4413% ( 411) 00:16:05.238 3.816 - 3.840: 72.5326% ( 410) 00:16:05.238 3.840 - 3.864: 75.9406% ( 452) 00:16:05.238 3.864 - 3.887: 79.6803% ( 496) 00:16:05.238 3.887 - 3.911: 82.8546% ( 421) 00:16:05.238 3.911 - 3.935: 85.6971% ( 377) 00:16:05.238 3.935 - 3.959: 87.7328% ( 270) 00:16:05.238 3.959 - 3.982: 89.4217% ( 224) 00:16:05.238 3.982 - 4.006: 91.0126% ( 211) 00:16:05.238 4.006 - 4.030: 92.3471% ( 177) 00:16:05.238 4.030 - 4.053: 93.3575% ( 134) 00:16:05.238 4.053 - 4.077: 94.2999% ( 125) 00:16:05.238 4.077 - 4.101: 95.0162% ( 95) 00:16:05.238 4.101 - 4.124: 95.5666% ( 73) 00:16:05.238 4.124 - 4.148: 95.9587% ( 52) 00:16:05.238 4.148 - 4.172: 96.2226% ( 35) 00:16:05.238 4.172 - 4.196: 96.4111% ( 25) 00:16:05.238 4.196 - 4.219: 96.5317% ( 16) 00:16:05.238 4.219 - 4.243: 96.6222% ( 12) 00:16:05.238 4.243 - 4.267: 96.6900% ( 9) 00:16:05.238 4.267 - 4.290: 96.7353% ( 6) 00:16:05.238 4.290 - 4.314: 96.8031% ( 9) 00:16:05.238 4.314 - 4.338: 96.9012% ( 13) 00:16:05.238 4.338 - 4.361: 96.9841% ( 11) 00:16:05.238 4.361 - 4.385: 97.0369% ( 7) 00:16:05.238 4.385 - 4.409: 97.0444% ( 1) 00:16:05.238 4.409 - 4.433: 97.0670% ( 3) 00:16:05.238 4.433 - 4.456: 97.0821% ( 2) 00:16:05.238 4.480 - 4.504: 97.0896% ( 1) 00:16:05.238 4.504 - 4.527: 97.0972% ( 1) 00:16:05.238 4.551 - 4.575: 97.1123% ( 2) 00:16:05.238 4.622 - 4.646: 97.1198% ( 1) 00:16:05.238 4.670 - 4.693: 97.1349% ( 2) 00:16:05.238 4.693 - 4.717: 97.1726% ( 5) 00:16:05.238 4.717 - 4.741: 97.1952% ( 3) 00:16:05.238 4.741 - 4.764: 97.2404% ( 6) 00:16:05.238 4.764 - 4.788: 97.2706% ( 4) 00:16:05.238 4.788 - 4.812: 97.3158% ( 6) 00:16:05.238 4.812 - 4.836: 97.3611% ( 6) 00:16:05.238 4.836 - 4.859: 97.3988% ( 5) 00:16:05.238 4.859 - 4.883: 97.4139% ( 2) 00:16:05.238 4.883 - 4.907: 97.4440% ( 4) 00:16:05.238 4.907 - 4.930: 97.5270% ( 11) 00:16:05.238 4.930 - 4.954: 97.5797% ( 7) 00:16:05.238 4.954 - 
4.978: 97.6024% ( 3) 00:16:05.238 4.978 - 5.001: 97.6401% ( 5) 00:16:05.238 5.001 - 5.025: 97.7154% ( 10) 00:16:05.238 5.025 - 5.049: 97.7456% ( 4) 00:16:05.238 5.049 - 5.073: 97.7833% ( 5) 00:16:05.238 5.073 - 5.096: 97.8285% ( 6) 00:16:05.238 5.096 - 5.120: 97.8512% ( 3) 00:16:05.238 5.120 - 5.144: 97.8587% ( 1) 00:16:05.238 5.144 - 5.167: 97.8738% ( 2) 00:16:05.238 5.167 - 5.191: 97.8813% ( 1) 00:16:05.238 5.191 - 5.215: 97.9039% ( 3) 00:16:05.238 5.215 - 5.239: 97.9492% ( 6) 00:16:05.238 5.239 - 5.262: 97.9793% ( 4) 00:16:05.238 5.262 - 5.286: 98.0020% ( 3) 00:16:05.238 5.286 - 5.310: 98.0095% ( 1) 00:16:05.238 5.310 - 5.333: 98.0170% ( 1) 00:16:05.238 5.333 - 5.357: 98.0321% ( 2) 00:16:05.238 5.357 - 5.381: 98.0472% ( 2) 00:16:05.238 5.381 - 5.404: 98.0547% ( 1) 00:16:05.238 5.404 - 5.428: 98.1000% ( 6) 00:16:05.238 5.452 - 5.476: 98.1151% ( 2) 00:16:05.238 5.499 - 5.523: 98.1452% ( 4) 00:16:05.238 5.523 - 5.547: 98.1528% ( 1) 00:16:05.238 5.594 - 5.618: 98.1754% ( 3) 00:16:05.238 5.618 - 5.641: 98.1829% ( 1) 00:16:05.238 5.665 - 5.689: 98.1905% ( 1) 00:16:05.238 5.689 - 5.713: 98.1980% ( 1) 00:16:05.238 5.713 - 5.736: 98.2055% ( 1) 00:16:05.238 5.807 - 5.831: 98.2131% ( 1) 00:16:05.238 5.831 - 5.855: 98.2206% ( 1) 00:16:05.238 5.855 - 5.879: 98.2357% ( 2) 00:16:05.238 5.950 - 5.973: 98.2432% ( 1) 00:16:05.238 5.997 - 6.021: 98.2583% ( 2) 00:16:05.238 6.044 - 6.068: 98.2734% ( 2) 00:16:05.238 6.068 - 6.116: 98.2809% ( 1) 00:16:05.238 6.116 - 6.163: 98.2885% ( 1) 00:16:05.238 6.210 - 6.258: 98.2960% ( 1) 00:16:05.238 6.258 - 6.305: 98.3036% ( 1) 00:16:05.238 6.447 - 6.495: 98.3111% ( 1) 00:16:05.238 6.495 - 6.542: 98.3186% ( 1) 00:16:05.238 6.827 - 6.874: 98.3337% ( 2) 00:16:05.238 6.874 - 6.921: 98.3413% ( 1) 00:16:05.238 6.921 - 6.969: 98.3488% ( 1) 00:16:05.238 6.969 - 7.016: 98.3563% ( 1) 00:16:05.238 7.064 - 7.111: 98.3639% ( 1) 00:16:05.238 7.111 - 7.159: 98.3714% ( 1) 00:16:05.238 7.206 - 7.253: 98.3789% ( 1) 00:16:05.238 7.253 - 7.301: 98.3865% ( 1) 00:16:05.238 7.348 - 7.396: 98.4016% ( 2) 00:16:05.238 7.396 - 7.443: 98.4166% ( 2) 00:16:05.238 7.443 - 7.490: 98.4317% ( 2) 00:16:05.238 7.538 - 7.585: 98.4393% ( 1) 00:16:05.238 7.585 - 7.633: 98.4468% ( 1) 00:16:05.238 7.633 - 7.680: 98.4543% ( 1) 00:16:05.238 7.680 - 7.727: 98.4619% ( 1) 00:16:05.238 7.727 - 7.775: 98.4694% ( 1) 00:16:05.238 7.775 - 7.822: 98.4920% ( 3) 00:16:05.238 7.917 - 7.964: 98.5222% ( 4) 00:16:05.238 8.012 - 8.059: 98.5448% ( 3) 00:16:05.238 8.059 - 8.107: 98.5524% ( 1) 00:16:05.239 8.107 - 8.154: 98.5599% ( 1) 00:16:05.239 8.154 - 8.201: 98.5674% ( 1) 00:16:05.239 8.201 - 8.249: 98.5825% ( 2) 00:16:05.239 8.249 - 8.296: 98.5901% ( 1) 00:16:05.239 8.296 - 8.344: 98.5976% ( 1) 00:16:05.239 8.344 - 8.391: 98.6127% ( 2) 00:16:05.239 8.391 - 8.439: 98.6202% ( 1) 00:16:05.239 8.439 - 8.486: 98.6353% ( 2) 00:16:05.239 8.581 - 8.628: 98.6428% ( 1) 00:16:05.239 8.676 - 8.723: 98.6504% ( 1) 00:16:05.239 8.770 - 8.818: 98.6579% ( 1) 00:16:05.239 8.913 - 8.960: 98.6655% ( 1) 00:16:05.239 8.960 - 9.007: 98.6805% ( 2) 00:16:05.239 9.007 - 9.055: 98.6956% ( 2) 00:16:05.239 9.292 - 9.339: 98.7107% ( 2) 00:16:05.239 9.481 - 9.529: 98.7182% ( 1) 00:16:05.239 9.529 - 9.576: 98.7258% ( 1) 00:16:05.239 9.671 - 9.719: 98.7333% ( 1) 00:16:05.239 10.145 - 10.193: 98.7409% ( 1) 00:16:05.239 10.240 - 10.287: 98.7484% ( 1) 00:16:05.239 10.335 - 10.382: 98.7559% ( 1) 00:16:05.239 10.430 - 10.477: 98.7710% ( 2) 00:16:05.239 10.524 - 10.572: 98.7786% ( 1) 00:16:05.239 10.619 - 10.667: 98.7861% ( 1) 00:16:05.239 10.714 - 10.761: 
98.8087% ( 3) 00:16:05.239 10.809 - 10.856: 98.8163% ( 1) 00:16:05.239 10.856 - 10.904: 98.8313% ( 2) 00:16:05.239 10.904 - 10.951: 98.8389% ( 1) 00:16:05.239 10.951 - 10.999: 98.8464% ( 1) 00:16:05.239 11.188 - 11.236: 98.8540% ( 1) 00:16:05.239 11.378 - 11.425: 98.8615% ( 1) 00:16:05.239 11.567 - 11.615: 98.8690% ( 1) 00:16:05.239 11.662 - 11.710: 98.8766% ( 1) 00:16:05.239 12.231 - 12.326: 98.8841% ( 1) 00:16:05.239 12.326 - 12.421: 98.9067% ( 3) 00:16:05.239 12.421 - 12.516: 98.9143% ( 1) 00:16:05.239 12.610 - 12.705: 98.9294% ( 2) 00:16:05.239 12.800 - 12.895: 98.9369% ( 1) 00:16:05.239 12.895 - 12.990: 98.9520% ( 2) 00:16:05.239 12.990 - 13.084: 98.9671% ( 2) 00:16:05.239 13.179 - 13.274: 98.9746% ( 1) 00:16:05.239 13.274 - 13.369: 98.9821% ( 1) 00:16:05.239 13.653 - 13.748: 98.9897% ( 1) 00:16:05.239 13.748 - 13.843: 98.9972% ( 1) 00:16:05.239 14.127 - 14.222: 99.0048% ( 1) 00:16:05.239 14.222 - 14.317: 99.0123% ( 1) 00:16:05.239 14.412 - 14.507: 99.0198% ( 1) 00:16:05.239 14.507 - 14.601: 99.0274% ( 1) 00:16:05.239 14.791 - 14.886: 99.0424% ( 2) 00:16:05.239 14.886 - 14.981: 99.0500% ( 1) 00:16:05.239 14.981 - 15.076: 99.0575% ( 1) 00:16:05.239 17.256 - 17.351: 99.0801% ( 3) 00:16:05.239 17.351 - 17.446: 99.1028% ( 3) 00:16:05.239 17.446 - 17.541: 99.1329% ( 4) 00:16:05.239 17.541 - 17.636: 99.1932% ( 8) 00:16:05.239 17.636 - 17.730: 99.2234% ( 4) 00:16:05.239 17.730 - 17.825: 99.2686% ( 6) 00:16:05.239 17.825 - 17.920: 99.2988% ( 4) 00:16:05.239 17.920 - 18.015: 99.3139% ( 2) 00:16:05.239 18.015 - 18.110: 99.3516% ( 5) 00:16:05.239 18.110 - 18.204: 99.4571% ( 14) 00:16:05.239 18.204 - 18.299: 99.5325% ( 10) 00:16:05.239 18.299 - 18.394: 99.5853% ( 7) 00:16:05.239 18.394 - 18.489: 99.6381% ( 7) 00:16:05.239 18.489 - 18.584: 99.6683% ( 4) 00:16:05.239 18.584 - 18.679: 99.6833% ( 2) 00:16:05.239 18.679 - 18.773: 99.7210% ( 5) 00:16:05.239 18.773 - 18.868: 99.7813% ( 8) 00:16:05.239 18.963 - 19.058: 99.7964% ( 2) 00:16:05.239 19.058 - 19.153: 99.8040% ( 1) 00:16:05.239 19.153 - 19.247: 99.8417% ( 5) 00:16:05.239 19.247 - 19.342: 99.8567% ( 2) 00:16:05.239 19.342 - 19.437: 99.8643% ( 1) 00:16:05.239 19.437 - 19.532: 99.8944% ( 4) 00:16:05.239 19.532 - 19.627: 99.9020% ( 1) 00:16:05.239 19.627 - 19.721: 99.9171% ( 2) 00:16:05.239 19.721 - 19.816: 99.9246% ( 1) 00:16:05.239 19.911 - 20.006: 99.9397% ( 2) 00:16:05.239 20.196 - 20.290: 99.9472% ( 1) 00:16:05.239 20.575 - 20.670: 99.9548% ( 1) 00:16:05.239 21.807 - 21.902: 99.9623% ( 1) 00:16:05.239 3980.705 - 4004.978: 99.9925% ( 4) 00:16:05.239 4004.978 - 4029.250: 100.0000% ( 1) 00:16:05.239 00:16:05.239 Complete histogram 00:16:05.239 ================== 00:16:05.239 Range in us Cumulative Count 00:16:05.239 2.062 - 2.074: 0.1885% ( 25) 00:16:05.239 2.074 - 2.086: 21.6316% ( 2844) 00:16:05.239 2.086 - 2.098: 41.1898% ( 2594) 00:16:05.239 2.098 - 2.110: 45.6005% ( 585) 00:16:05.239 2.110 - 2.121: 56.5558% ( 1453) 00:16:05.239 2.121 - 2.133: 61.1702% ( 612) 00:16:05.239 2.133 - 2.145: 64.1258% ( 392) 00:16:05.239 2.145 - 2.157: 75.0132% ( 1444) 00:16:05.239 2.157 - 2.169: 80.0121% ( 663) 00:16:05.239 2.169 - 2.181: 82.0855% ( 275) 00:16:05.239 2.181 - 2.193: 85.9911% ( 518) 00:16:05.239 2.193 - 2.204: 87.5292% ( 204) 00:16:05.239 2.204 - 2.216: 88.4038% ( 116) 00:16:05.239 2.216 - 2.228: 90.1455% ( 231) 00:16:05.239 2.228 - 2.240: 91.4197% ( 169) 00:16:05.239 2.240 - 2.252: 93.4253% ( 266) 00:16:05.239 2.252 - 2.264: 94.3979% ( 129) 00:16:05.239 2.264 - 2.276: 94.7071% ( 41) 00:16:05.239 2.276 - 2.287: 95.0011% ( 39) 00:16:05.239 2.287 - 
2.299: 95.1293% ( 17) 00:16:05.239 2.299 - 2.311: 95.2801% ( 20) 00:16:05.239 2.311 - 2.323: 95.5591% ( 37) 00:16:05.239 2.323 - 2.335: 95.6873% ( 17) 00:16:05.239 2.335 - 2.347: 95.7626% ( 10) 00:16:05.239 2.347 - 2.359: 95.7928% ( 4) 00:16:05.239 2.359 - 2.370: 95.9361% ( 19) 00:16:05.239 2.370 - 2.382: 96.1170% ( 24) 00:16:05.239 2.382 - 2.394: 96.4111% ( 39) 00:16:05.239 2.394 - 2.406: 96.6825% ( 36) 00:16:05.239 2.406 - 2.418: 96.9238% ( 32) 00:16:05.239 2.418 - 2.430: 97.1877% ( 35) 00:16:05.239 2.430 - 2.441: 97.4214% ( 31) 00:16:05.239 2.441 - 2.453: 97.5722% ( 20) 00:16:05.239 2.453 - 2.465: 97.7305% ( 21) 00:16:05.239 2.465 - 2.477: 97.8662% ( 18) 00:16:05.239 2.477 - 2.489: 97.9416% ( 10) 00:16:05.239 2.489 - 2.501: 97.9944% ( 7) 00:16:05.239 2.501 - 2.513: 98.0170% ( 3) 00:16:05.239 2.513 - 2.524: 98.0547% ( 5) 00:16:05.239 2.524 - 2.536: 98.0623% ( 1) 00:16:05.239 2.536 - 2.548: 98.0774% ( 2) 00:16:05.239 2.548 - 2.560: 98.0924% ( 2) 00:16:05.239 2.560 - 2.572: 98.1075% ( 2) 00:16:05.239 2.572 - 2.584: 98.1151% ( 1) 00:16:05.239 2.584 - 2.596: 98.1301% ( 2) 00:16:05.239 2.596 - 2.607: 98.1377% ( 1) 00:16:05.239 2.667 - 2.679: 98.1452% ( 1) 00:16:05.239 2.679 - 2.690: 98.1603% ( 2) 00:16:05.239 2.702 - 2.714: 98.1678% ( 1) 00:16:05.239 2.714 - 2.726: 98.1754% ( 1) 00:16:05.239 2.773 - 2.785: 98.1905% ( 2) 00:16:05.239 2.785 - 2.797: 98.2055% ( 2) 00:16:05.239 2.809 - 2.821: 98.2206% ( 2) 00:16:05.239 2.844 - 2.856: 98.2282% ( 1) 00:16:05.239 2.856 - 2.868: 98.2357% ( 1) 00:16:05.239 2.880 - 2.892: 98.2432% ( 1) 00:16:05.239 2.892 - 2.904: 98.2508% ( 1) 00:16:05.239 2.939 - 2.951: 98.2583% ( 1) 00:16:05.239 2.951 - 2.963: 98.2734% ( 2) 00:16:05.239 3.058 - 3.081: 98.2960% ( 3) 00:16:05.239 3.081 - 3.105: 98.3036% ( 1) 00:16:05.239 3.105 - 3.129: 98.3111% ( 1) 00:16:05.239 3.129 - 3.153: 98.3186% ( 1) 00:16:05.239 3.153 - 3.176: 98.3262% ( 1) 00:16:05.239 3.176 - 3.200: 98.3337% ( 1) 00:16:05.239 3.200 - 3.224: 98.3563% ( 3) 00:16:05.239 3.224 - 3.247: 98.3639% ( 1) 00:16:05.239 3.247 - 3.271: 98.4016% ( 5) 00:16:05.239 3.295 - 3.319: 98.4091% ( 1) 00:16:05.239 3.342 - 3.366: 98.4166% ( 1) 00:16:05.239 3.366 - 3.390: 98.4242% ( 1) 00:16:05.239 3.390 - 3.413: 98.4393% ( 2) 00:16:05.239 3.413 - 3.437: 98.4468% ( 1) 00:16:05.239 3.437 - 3.461: 98.4619% ( 2) 00:16:05.239 3.461 - 3.484: 98.4845% ( 3) 00:16:05.239 3.484 - 3.508: 98.4920% ( 1) 00:16:05.239 3.508 - 3.532: 98.5147% ( 3) 00:16:05.239 3.532 - 3.556: 98.5222% ( 1) 00:16:05.239 3.579 - 3.603: 98.5297% ( 1) 00:16:05.239 3.603 - 3.627: 98.5599% ( 4) 00:16:05.239 3.627 - 3.650: 98.5825% ( 3) 00:16:05.239 3.650 - 3.674: 98.5901% ( 1) 00:16:05.239 3.698 - 3.721: 98.6127% ( 3) 00:16:05.239 3.745 - 3.769: 98.6202% ( 1) 00:16:05.239 3.769 - 3.793: 98.6353% ( 2) 00:16:05.239 3.793 - 3.816: 98.6504% ( 2) 00:16:05.239 3.840 - 3.864: 98.6655% ( 2) 00:16:05.239 3.864 - 3.887: 98.6730% ( 1) 00:16:05.239 3.887 - 3.911: 98.6805% ( 1) 00:16:05.239 3.911 - 3.935: 98.7032% ( 3) 00:16:05.239 4.053 - 4.077: 98.7107% ( 1) 00:16:05.239 5.333 - 5.357: 98.7182% ( 1) 00:16:05.239 5.357 - 5.381: 98.7258% ( 1) 00:16:05.239 5.404 - 5.428: 98.7333% ( 1) 00:16:05.239 5.476 - 5.499: 98.7409% ( 1) 00:16:05.239 5.736 - 5.760: 98.7484% ( 1) 00:16:05.239 5.760 - 5.784: 98.7635% ( 2) 00:16:05.239 5.926 - 5.950: 98.7710% ( 1) 00:16:05.239 6.068 - 6.116: 98.7786% ( 1) 00:16:05.239 6.210 - 6.258: 98.7936% ( 2) 00:16:05.239 6.447 - 6.495: 98.8012% ( 1) 00:16:05.239 6.542 - 6.590: 98.8087% ( 1) 00:16:05.239 6.590 - 6.637: 98.8163% ( 1) 00:16:05.239 6.637 - 6.684: 
98.8238% ( 1) 00:16:05.239 6.827 - 6.874: 98.8313% ( 1) 00:16:05.239 6.921 - 6.969: 98.8389% ( 1) 00:16:05.239 9.813 - 9.861: 98.8464% ( 1) 00:16:05.239 9.956 - 10.003: 98.8540% ( 1) 00:16:05.239 12.421 - 12.516: 98.8615% ( 1) 00:16:05.239 14.981 - 15.076: 98.8690% ( 1) 00:16:05.239 15.550 - 15.644: 98.8766% ( 1) 00:16:05.240 15.644 - 15.739: 98.8841% ( 1) 00:16:05.240 15.739 - 15.834: 98.8917% ( 1) 00:16:05.240 15.929 - 16.024: 98.9067% ( 2) 00:16:05.240 16.024 - 16.119: 98.9369% ( 4) 00:16:05.240 16.119 - 16.213: 98.9444% ( 1) 00:16:05.240 16.213 - 16.308: 98.9821% ( 5) 00:16:05.240 16.308 - 16.403: 99.0123% ( 4) 00:16:05.240 16.403 - 16.498: 99.0726% ( 8) 00:16:05.240 16.498 - 16.593: 99.1178% ( 6) 00:16:05.240 16.593 - 16.687: 99.1329% ( 2) 00:16:05.240 16.687 - 16.782: 99.1555% ( 3) 00:16:05.240 16.782 - 16.877: 99.1782% ( 3) 00:16:05.240 16.877 - 16.972: 99.1932% ( 2) 00:16:05.240 16.972 - 17.067: 99.2159% ( 3) 00:16:05.240 17.067 - 17.161: 99.2385% ( 3) 00:16:05.240 17.161 - 17.256: 99.2460% ( 1) 00:16:05.240 17.256 - 17.351: 99.2611% ( 2) 00:16:05.240 17.351 - 17.446: 99.2762% ( 2) 00:16:05.240 17.446 - 17.541: 99.2837% ( 1) 00:16:05.240 17.541 - 17.636: 99.2913% ( 1) 00:16:05.240 17.636 - 17.730: 99.2988% ( 1) 00:16:05.240 17.825 - 17.920: 99.3063% ( 1) 00:16:05.240 18.015 - 18.110: 99.3139% ( 1) 00:16:05.240 18.110 - 18.204: 99.3214% ( 1) 00:16:05.240 18.204 - 18.299: 99.3290% ( 1) 00:16:05.240 18.299 - 18.394: 99.3516% ( 3) 00:16:05.240 18.394 - 18.489: 99.3667% ( 2) 00:16:05.240 3980.705 - 4004.978: 99.7587% ( 52) 00:16:05.497 4004.978 - 4029.250: 100.0000%[2024-07-14 09:25:49.696218] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:05.497 ( 32) 00:16:05.497 00:16:05.497 09:25:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:16:05.497 09:25:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:05.497 09:25:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:16:05.497 09:25:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:16:05.497 09:25:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:05.781 [ 00:16:05.781 { 00:16:05.781 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:05.781 "subtype": "Discovery", 00:16:05.781 "listen_addresses": [], 00:16:05.781 "allow_any_host": true, 00:16:05.781 "hosts": [] 00:16:05.781 }, 00:16:05.781 { 00:16:05.781 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:05.781 "subtype": "NVMe", 00:16:05.781 "listen_addresses": [ 00:16:05.781 { 00:16:05.781 "trtype": "VFIOUSER", 00:16:05.781 "adrfam": "IPv4", 00:16:05.781 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:05.781 "trsvcid": "0" 00:16:05.781 } 00:16:05.781 ], 00:16:05.781 "allow_any_host": true, 00:16:05.781 "hosts": [], 00:16:05.781 "serial_number": "SPDK1", 00:16:05.781 "model_number": "SPDK bdev Controller", 00:16:05.781 "max_namespaces": 32, 00:16:05.781 "min_cntlid": 1, 00:16:05.781 "max_cntlid": 65519, 00:16:05.781 "namespaces": [ 00:16:05.781 { 00:16:05.781 "nsid": 1, 00:16:05.781 "bdev_name": "Malloc1", 00:16:05.781 "name": "Malloc1", 00:16:05.781 "nguid": "72E04618E719425D9486CD4295638238", 00:16:05.781 "uuid": "72e04618-e719-425d-9486-cd4295638238" 00:16:05.781 
}, 00:16:05.781 { 00:16:05.781 "nsid": 2, 00:16:05.781 "bdev_name": "Malloc3", 00:16:05.781 "name": "Malloc3", 00:16:05.781 "nguid": "23258149379C466187EE93E7DFA268BF", 00:16:05.781 "uuid": "23258149-379c-4661-87ee-93e7dfa268bf" 00:16:05.781 } 00:16:05.781 ] 00:16:05.781 }, 00:16:05.781 { 00:16:05.781 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:05.781 "subtype": "NVMe", 00:16:05.781 "listen_addresses": [ 00:16:05.781 { 00:16:05.781 "trtype": "VFIOUSER", 00:16:05.781 "adrfam": "IPv4", 00:16:05.781 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:05.781 "trsvcid": "0" 00:16:05.781 } 00:16:05.781 ], 00:16:05.781 "allow_any_host": true, 00:16:05.781 "hosts": [], 00:16:05.781 "serial_number": "SPDK2", 00:16:05.781 "model_number": "SPDK bdev Controller", 00:16:05.781 "max_namespaces": 32, 00:16:05.781 "min_cntlid": 1, 00:16:05.781 "max_cntlid": 65519, 00:16:05.781 "namespaces": [ 00:16:05.781 { 00:16:05.781 "nsid": 1, 00:16:05.781 "bdev_name": "Malloc2", 00:16:05.781 "name": "Malloc2", 00:16:05.781 "nguid": "B1B83843791D46559569545530C1952A", 00:16:05.781 "uuid": "b1b83843-791d-4655-9569-545530c1952a" 00:16:05.781 } 00:16:05.781 ] 00:16:05.781 } 00:16:05.781 ] 00:16:05.781 09:25:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:05.781 09:25:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=706884 00:16:05.781 09:25:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:16:05.781 09:25:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:05.781 09:25:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:16:05.781 09:25:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:05.781 09:25:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:05.781 09:25:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:16:05.781 09:25:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:05.781 09:25:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:16:05.781 EAL: No free 2048 kB hugepages reported on node 1 00:16:05.781 [2024-07-14 09:25:50.190401] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:06.040 Malloc4 00:16:06.040 09:25:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:16:06.298 [2024-07-14 09:25:50.553012] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:06.298 09:25:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:06.298 Asynchronous Event Request test 00:16:06.298 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:06.298 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:06.298 Registering asynchronous event callbacks... 
00:16:06.298 Starting namespace attribute notice tests for all controllers... 00:16:06.298 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:06.298 aer_cb - Changed Namespace 00:16:06.298 Cleaning up... 00:16:06.557 [ 00:16:06.557 { 00:16:06.557 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:06.557 "subtype": "Discovery", 00:16:06.557 "listen_addresses": [], 00:16:06.557 "allow_any_host": true, 00:16:06.557 "hosts": [] 00:16:06.557 }, 00:16:06.557 { 00:16:06.557 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:06.557 "subtype": "NVMe", 00:16:06.557 "listen_addresses": [ 00:16:06.557 { 00:16:06.557 "trtype": "VFIOUSER", 00:16:06.557 "adrfam": "IPv4", 00:16:06.557 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:06.557 "trsvcid": "0" 00:16:06.557 } 00:16:06.557 ], 00:16:06.557 "allow_any_host": true, 00:16:06.557 "hosts": [], 00:16:06.557 "serial_number": "SPDK1", 00:16:06.557 "model_number": "SPDK bdev Controller", 00:16:06.557 "max_namespaces": 32, 00:16:06.557 "min_cntlid": 1, 00:16:06.557 "max_cntlid": 65519, 00:16:06.557 "namespaces": [ 00:16:06.557 { 00:16:06.557 "nsid": 1, 00:16:06.557 "bdev_name": "Malloc1", 00:16:06.557 "name": "Malloc1", 00:16:06.557 "nguid": "72E04618E719425D9486CD4295638238", 00:16:06.557 "uuid": "72e04618-e719-425d-9486-cd4295638238" 00:16:06.557 }, 00:16:06.557 { 00:16:06.557 "nsid": 2, 00:16:06.557 "bdev_name": "Malloc3", 00:16:06.557 "name": "Malloc3", 00:16:06.557 "nguid": "23258149379C466187EE93E7DFA268BF", 00:16:06.557 "uuid": "23258149-379c-4661-87ee-93e7dfa268bf" 00:16:06.557 } 00:16:06.557 ] 00:16:06.557 }, 00:16:06.557 { 00:16:06.557 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:06.557 "subtype": "NVMe", 00:16:06.557 "listen_addresses": [ 00:16:06.557 { 00:16:06.557 "trtype": "VFIOUSER", 00:16:06.557 "adrfam": "IPv4", 00:16:06.557 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:06.557 "trsvcid": "0" 00:16:06.557 } 00:16:06.557 ], 00:16:06.557 "allow_any_host": true, 00:16:06.557 "hosts": [], 00:16:06.557 "serial_number": "SPDK2", 00:16:06.558 "model_number": "SPDK bdev Controller", 00:16:06.558 "max_namespaces": 32, 00:16:06.558 "min_cntlid": 1, 00:16:06.558 "max_cntlid": 65519, 00:16:06.558 "namespaces": [ 00:16:06.558 { 00:16:06.558 "nsid": 1, 00:16:06.558 "bdev_name": "Malloc2", 00:16:06.558 "name": "Malloc2", 00:16:06.558 "nguid": "B1B83843791D46559569545530C1952A", 00:16:06.558 "uuid": "b1b83843-791d-4655-9569-545530c1952a" 00:16:06.558 }, 00:16:06.558 { 00:16:06.558 "nsid": 2, 00:16:06.558 "bdev_name": "Malloc4", 00:16:06.558 "name": "Malloc4", 00:16:06.558 "nguid": "D27D6C3A025F43C8BA86DC4E0188C28C", 00:16:06.558 "uuid": "d27d6c3a-025f-43c8-ba86-dc4e0188c28c" 00:16:06.558 } 00:16:06.558 ] 00:16:06.558 } 00:16:06.558 ] 00:16:06.558 09:25:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 706884 00:16:06.558 09:25:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:16:06.558 09:25:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 701293 00:16:06.558 09:25:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 701293 ']' 00:16:06.558 09:25:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 701293 00:16:06.558 09:25:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:16:06.558 09:25:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:06.558 09:25:50 nvmf_tcp.nvmf_vfio_user -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 701293 00:16:06.558 09:25:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:06.558 09:25:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:06.558 09:25:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 701293' 00:16:06.558 killing process with pid 701293 00:16:06.558 09:25:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 701293 00:16:06.558 09:25:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 701293 00:16:06.815 09:25:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:06.815 09:25:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:06.815 09:25:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:16:06.815 09:25:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:16:06.815 09:25:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:16:06.815 09:25:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=707027 00:16:06.815 09:25:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:16:06.815 09:25:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 707027' 00:16:06.815 Process pid: 707027 00:16:06.815 09:25:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:06.815 09:25:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 707027 00:16:06.815 09:25:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 707027 ']' 00:16:06.815 09:25:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:06.815 09:25:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:06.815 09:25:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:06.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:06.815 09:25:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:06.815 09:25:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:06.815 [2024-07-14 09:25:51.233524] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:16:06.815 [2024-07-14 09:25:51.234581] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:16:06.815 [2024-07-14 09:25:51.234640] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:06.815 EAL: No free 2048 kB hugepages reported on node 1 00:16:07.072 [2024-07-14 09:25:51.294477] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:07.072 [2024-07-14 09:25:51.383052] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:16:07.072 [2024-07-14 09:25:51.383116] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:07.072 [2024-07-14 09:25:51.383145] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:07.073 [2024-07-14 09:25:51.383156] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:07.073 [2024-07-14 09:25:51.383167] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:07.073 [2024-07-14 09:25:51.383299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:07.073 [2024-07-14 09:25:51.383365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:07.073 [2024-07-14 09:25:51.383431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:07.073 [2024-07-14 09:25:51.383433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.073 [2024-07-14 09:25:51.490482] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:16:07.073 [2024-07-14 09:25:51.490730] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:16:07.073 [2024-07-14 09:25:51.491028] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:16:07.073 [2024-07-14 09:25:51.491639] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:07.073 [2024-07-14 09:25:51.491897] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:16:07.073 09:25:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:07.073 09:25:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:16:07.073 09:25:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:08.443 09:25:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:16:08.443 09:25:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:08.443 09:25:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:08.443 09:25:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:08.443 09:25:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:08.443 09:25:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:08.700 Malloc1 00:16:08.700 09:25:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:08.962 09:25:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:09.223 09:25:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:09.480 09:25:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:09.480 09:25:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:09.480 09:25:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:09.738 Malloc2 00:16:09.738 09:25:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:09.995 09:25:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:10.253 09:25:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:10.511 09:25:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:10.512 09:25:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 707027 00:16:10.512 09:25:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 707027 ']' 00:16:10.512 09:25:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 707027 00:16:10.512 09:25:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:16:10.512 09:25:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:10.512 09:25:54 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 707027 00:16:10.512 09:25:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:10.512 09:25:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:10.512 09:25:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 707027' 00:16:10.512 killing process with pid 707027 00:16:10.512 09:25:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 707027 00:16:10.512 09:25:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 707027 00:16:10.770 09:25:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:10.770 09:25:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:10.770 00:16:10.770 real 0m52.666s 00:16:10.770 user 3m28.082s 00:16:10.770 sys 0m4.486s 00:16:10.770 09:25:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:10.770 09:25:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:10.770 ************************************ 00:16:10.770 END TEST nvmf_vfio_user 00:16:10.770 ************************************ 00:16:10.770 09:25:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:10.770 09:25:55 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:10.770 09:25:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:10.770 09:25:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:10.770 09:25:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:10.770 ************************************ 00:16:10.770 START TEST nvmf_vfio_user_nvme_compliance 00:16:10.770 ************************************ 00:16:10.770 09:25:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:10.770 * Looking for test storage... 
00:16:10.770 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:10.770 09:25:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:10.770 09:25:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:16:11.030 09:25:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:11.030 09:25:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:11.030 09:25:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:11.030 09:25:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:11.030 09:25:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:11.030 09:25:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:11.030 09:25:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:11.030 09:25:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:11.030 09:25:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:11.030 09:25:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:11.030 09:25:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:11.030 09:25:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:11.030 09:25:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:11.030 09:25:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:11.030 09:25:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:11.030 09:25:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:11.030 09:25:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:11.030 09:25:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:11.030 09:25:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:11.030 09:25:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:11.030 09:25:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.030 09:25:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.030 09:25:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.030 09:25:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:16:11.030 09:25:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.030 09:25:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:16:11.030 09:25:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:11.030 09:25:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:11.030 09:25:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:11.030 09:25:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:11.030 09:25:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:11.030 09:25:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:11.030 09:25:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:11.030 09:25:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:11.030 09:25:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:11.030 09:25:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:11.030 09:25:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:11.030 09:25:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:11.030 09:25:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:11.030 09:25:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=707621 00:16:11.030 09:25:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:11.030 09:25:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 707621' 00:16:11.030 Process pid: 707621 00:16:11.030 09:25:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:11.030 09:25:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 707621 00:16:11.030 09:25:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 707621 ']' 00:16:11.030 09:25:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:11.030 09:25:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:11.030 09:25:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:11.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:11.030 09:25:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:11.030 09:25:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:11.030 [2024-07-14 09:25:55.287178] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:16:11.030 [2024-07-14 09:25:55.287258] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:11.030 EAL: No free 2048 kB hugepages reported on node 1 00:16:11.030 [2024-07-14 09:25:55.348033] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:11.030 [2024-07-14 09:25:55.433075] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:11.030 [2024-07-14 09:25:55.433130] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:11.030 [2024-07-14 09:25:55.433160] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:11.030 [2024-07-14 09:25:55.433171] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:11.030 [2024-07-14 09:25:55.433187] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
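The target bring-up traced above reduces to a short shell pattern. The sketch below is a reconstruction from the xtrace lines, not an excerpt of compliance.sh, and it assumes the waitforlisten/killprocess helpers come from the autotest_common.sh that the suite sources.

    # Minimal sketch of the compliance target bring-up (assumes the SPDK repo root as CWD
    # and that the test/common helpers providing waitforlisten/killprocess are sourced):
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
    nvmfpid=$!
    trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
    # Block until the target is up and accepting RPCs on /var/tmp/spdk.sock.
    waitforlisten $nvmfpid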
00:16:11.030 [2024-07-14 09:25:55.433267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:11.030 [2024-07-14 09:25:55.433289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:11.030 [2024-07-14 09:25:55.433292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:11.289 09:25:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:11.289 09:25:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:16:11.289 09:25:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:16:12.224 09:25:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:12.224 09:25:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:12.224 09:25:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:12.224 09:25:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.224 09:25:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:12.224 09:25:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.224 09:25:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:12.224 09:25:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:12.224 09:25:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.224 09:25:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:12.224 malloc0 00:16:12.224 09:25:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.224 09:25:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:16:12.224 09:25:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.224 09:25:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:12.224 09:25:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.224 09:25:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:12.224 09:25:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.224 09:25:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:12.224 09:25:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.224 09:25:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:12.224 09:25:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.224 09:25:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:12.224 09:25:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.224 
09:25:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:12.224 EAL: No free 2048 kB hugepages reported on node 1 00:16:12.482 00:16:12.482 00:16:12.482 CUnit - A unit testing framework for C - Version 2.1-3 00:16:12.482 http://cunit.sourceforge.net/ 00:16:12.482 00:16:12.482 00:16:12.482 Suite: nvme_compliance 00:16:12.482 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-14 09:25:56.770405] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:12.482 [2024-07-14 09:25:56.771825] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:12.482 [2024-07-14 09:25:56.771864] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:12.482 [2024-07-14 09:25:56.771893] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:12.482 [2024-07-14 09:25:56.773422] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:12.482 passed 00:16:12.482 Test: admin_identify_ctrlr_verify_fused ...[2024-07-14 09:25:56.859006] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:12.482 [2024-07-14 09:25:56.862029] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:12.482 passed 00:16:12.740 Test: admin_identify_ns ...[2024-07-14 09:25:56.950794] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:12.740 [2024-07-14 09:25:57.007899] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:12.740 [2024-07-14 09:25:57.015884] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:12.740 [2024-07-14 09:25:57.037010] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:12.740 passed 00:16:12.740 Test: admin_get_features_mandatory_features ...[2024-07-14 09:25:57.119268] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:12.740 [2024-07-14 09:25:57.122288] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:12.740 passed 00:16:12.998 Test: admin_get_features_optional_features ...[2024-07-14 09:25:57.206821] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:12.998 [2024-07-14 09:25:57.212875] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:12.998 passed 00:16:12.998 Test: admin_set_features_number_of_queues ...[2024-07-14 09:25:57.294060] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:12.998 [2024-07-14 09:25:57.398985] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:12.998 passed 00:16:13.256 Test: admin_get_log_page_mandatory_logs ...[2024-07-14 09:25:57.485166] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:13.256 [2024-07-14 09:25:57.488206] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:13.256 passed 00:16:13.256 Test: admin_get_log_page_with_lpo ...[2024-07-14 09:25:57.569455] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:13.256 [2024-07-14 09:25:57.636880] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:16:13.256 [2024-07-14 09:25:57.649961] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:13.256 passed 00:16:13.514 Test: fabric_property_get ...[2024-07-14 09:25:57.733590] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:13.514 [2024-07-14 09:25:57.734891] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:16:13.514 [2024-07-14 09:25:57.736618] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:13.514 passed 00:16:13.514 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-14 09:25:57.822181] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:13.514 [2024-07-14 09:25:57.823471] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:16:13.514 [2024-07-14 09:25:57.825200] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:13.514 passed 00:16:13.514 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-14 09:25:57.909434] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:13.772 [2024-07-14 09:25:57.992877] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:13.772 [2024-07-14 09:25:58.008890] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:13.772 [2024-07-14 09:25:58.013966] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:13.772 passed 00:16:13.772 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-14 09:25:58.097555] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:13.772 [2024-07-14 09:25:58.098833] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:16:13.772 [2024-07-14 09:25:58.100580] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:13.772 passed 00:16:13.772 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-14 09:25:58.181782] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:14.030 [2024-07-14 09:25:58.258889] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:14.030 [2024-07-14 09:25:58.282889] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:14.030 [2024-07-14 09:25:58.287991] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:14.030 passed 00:16:14.030 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-14 09:25:58.371753] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:14.030 [2024-07-14 09:25:58.373084] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:16:14.030 [2024-07-14 09:25:58.373125] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:16:14.030 [2024-07-14 09:25:58.374776] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:14.030 passed 00:16:14.030 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-14 09:25:58.456326] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:14.295 [2024-07-14 09:25:58.551873] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:16:14.295 [2024-07-14 09:25:58.559877] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:16:14.295 [2024-07-14 09:25:58.567873] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:16:14.295 [2024-07-14 09:25:58.575877] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:16:14.295 [2024-07-14 09:25:58.604985] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:14.295 passed 00:16:14.295 Test: admin_create_io_sq_verify_pc ...[2024-07-14 09:25:58.687693] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:14.295 [2024-07-14 09:25:58.700892] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:16:14.295 [2024-07-14 09:25:58.718191] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:14.608 passed 00:16:14.608 Test: admin_create_io_qp_max_qps ...[2024-07-14 09:25:58.806780] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:15.542 [2024-07-14 09:25:59.916884] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:16:16.109 [2024-07-14 09:26:00.290342] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:16.109 passed 00:16:16.109 Test: admin_create_io_sq_shared_cq ...[2024-07-14 09:26:00.374485] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:16.109 [2024-07-14 09:26:00.506877] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:16.109 [2024-07-14 09:26:00.543962] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:16.368 passed 00:16:16.368 00:16:16.368 Run Summary: Type Total Ran Passed Failed Inactive 00:16:16.368 suites 1 1 n/a 0 0 00:16:16.368 tests 18 18 18 0 0 00:16:16.368 asserts 360 360 360 0 n/a 00:16:16.368 00:16:16.368 Elapsed time = 1.565 seconds 00:16:16.368 09:26:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 707621 00:16:16.368 09:26:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 707621 ']' 00:16:16.368 09:26:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 707621 00:16:16.368 09:26:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:16:16.368 09:26:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:16.368 09:26:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 707621 00:16:16.368 09:26:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:16.368 09:26:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:16.368 09:26:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 707621' 00:16:16.368 killing process with pid 707621 00:16:16.368 09:26:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 707621 00:16:16.368 09:26:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 707621 00:16:16.627 09:26:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:16:16.627 09:26:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:16.627 00:16:16.627 real 0m5.698s 00:16:16.627 user 0m16.051s 00:16:16.627 sys 0m0.540s 00:16:16.627 09:26:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:16.627 09:26:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:16.627 ************************************ 00:16:16.627 END TEST nvmf_vfio_user_nvme_compliance 00:16:16.627 ************************************ 00:16:16.627 09:26:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:16.627 09:26:00 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:16.627 09:26:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:16.627 09:26:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:16.627 09:26:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:16.627 ************************************ 00:16:16.627 START TEST nvmf_vfio_user_fuzz 00:16:16.627 ************************************ 00:16:16.627 09:26:00 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:16.627 * Looking for test storage... 00:16:16.627 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:16.627 09:26:00 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:16.627 09:26:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:16:16.627 09:26:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:16.627 09:26:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:16.627 09:26:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:16.627 09:26:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:16.627 09:26:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:16.627 09:26:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:16.627 09:26:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:16.627 09:26:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:16.627 09:26:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:16.627 09:26:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:16.627 09:26:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:16.627 09:26:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:16.627 09:26:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:16.627 09:26:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:16.627 09:26:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:16.627 09:26:00 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:16.627 09:26:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:16.627 09:26:00 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:16.627 09:26:00 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:16.627 09:26:00 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:16.627 09:26:00 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.627 09:26:00 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.627 09:26:00 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.627 09:26:00 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:16:16.627 09:26:00 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.627 09:26:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:16:16.627 09:26:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:16.627 09:26:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:16.627 09:26:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:16.627 09:26:00 nvmf_tcp.nvmf_vfio_user_fuzz -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:16.627 09:26:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:16.627 09:26:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:16.627 09:26:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:16.627 09:26:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:16.627 09:26:00 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:16.627 09:26:00 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:16.627 09:26:00 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:16.627 09:26:00 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:16.627 09:26:00 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:16.627 09:26:00 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:16.627 09:26:00 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:16.627 09:26:00 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=708347 00:16:16.627 09:26:00 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:16.627 09:26:00 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 708347' 00:16:16.627 Process pid: 708347 00:16:16.627 09:26:00 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:16.627 09:26:00 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 708347 00:16:16.627 09:26:00 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 708347 ']' 00:16:16.627 09:26:00 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:16.627 09:26:00 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:16.627 09:26:00 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:16.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
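Both the compliance suite above and the fuzz run that follows expose the same style of vfio-user target. The sequence below mirrors the rpc_cmd calls visible in the trace; issuing them through scripts/rpc.py directly, rather than the suite's rpc_cmd wrapper, is an assumption made for readability.

    # vfio-user target setup, condensed from the traced rpc_cmd calls:
    scripts/rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    scripts/rpc.py bdev_malloc_create 64 512 -b malloc0          # 64 MiB bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
        -t VFIOUSER -a /var/run/vfio-user -s 0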
00:16:16.627 09:26:00 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:16.627 09:26:00 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:16.886 09:26:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:16.886 09:26:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:16:16.886 09:26:01 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:18.260 09:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:18.260 09:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.260 09:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:18.260 09:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.260 09:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:18.260 09:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:18.260 09:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.260 09:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:18.260 malloc0 00:16:18.260 09:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.260 09:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:18.260 09:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.260 09:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:18.260 09:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.260 09:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:18.260 09:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.260 09:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:18.260 09:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.260 09:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:18.260 09:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.260 09:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:18.260 09:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.260 09:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:16:18.260 09:26:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:50.321 Fuzzing completed. 
Shutting down the fuzz application 00:16:50.321 00:16:50.321 Dumping successful admin opcodes: 00:16:50.321 8, 9, 10, 24, 00:16:50.321 Dumping successful io opcodes: 00:16:50.321 0, 00:16:50.321 NS: 0x200003a1ef00 I/O qp, Total commands completed: 603013, total successful commands: 2329, random_seed: 3014222208 00:16:50.321 NS: 0x200003a1ef00 admin qp, Total commands completed: 134882, total successful commands: 1087, random_seed: 3590780544 00:16:50.321 09:26:32 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:50.321 09:26:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.321 09:26:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:50.321 09:26:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.321 09:26:32 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 708347 00:16:50.321 09:26:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 708347 ']' 00:16:50.321 09:26:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 708347 00:16:50.321 09:26:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:16:50.321 09:26:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:50.321 09:26:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 708347 00:16:50.321 09:26:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:50.321 09:26:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:50.321 09:26:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 708347' 00:16:50.321 killing process with pid 708347 00:16:50.321 09:26:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 708347 00:16:50.321 09:26:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 708347 00:16:50.321 09:26:33 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:50.321 09:26:33 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:50.321 00:16:50.321 real 0m32.218s 00:16:50.321 user 0m31.134s 00:16:50.321 sys 0m30.455s 00:16:50.321 09:26:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:50.321 09:26:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:50.321 ************************************ 00:16:50.321 END TEST nvmf_vfio_user_fuzz 00:16:50.321 ************************************ 00:16:50.321 09:26:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:50.321 09:26:33 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:50.321 09:26:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:50.321 09:26:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:50.321 09:26:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:50.321 ************************************ 00:16:50.321 START 
TEST nvmf_host_management 00:16:50.321 ************************************ 00:16:50.321 09:26:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:50.321 * Looking for test storage... 00:16:50.321 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:50.321 09:26:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:50.321 09:26:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:16:50.321 09:26:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:50.321 09:26:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:50.321 09:26:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:50.321 09:26:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:50.321 09:26:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:50.321 09:26:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:50.321 09:26:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:50.321 09:26:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:50.321 09:26:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:50.321 09:26:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:50.321 09:26:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:50.321 09:26:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:50.321 09:26:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:50.321 09:26:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:50.321 09:26:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:50.321 09:26:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:50.322 09:26:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:50.322 09:26:33 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:50.322 09:26:33 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:50.322 09:26:33 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:50.322 09:26:33 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.322 09:26:33 
nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.322 09:26:33 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.322 09:26:33 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:16:50.322 09:26:33 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.322 09:26:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:16:50.322 09:26:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:50.322 09:26:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:50.322 09:26:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:50.322 09:26:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:50.322 09:26:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:50.322 09:26:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:50.322 09:26:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:50.322 09:26:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:50.322 09:26:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:50.322 09:26:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:50.322 09:26:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:16:50.322 09:26:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:50.322 09:26:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:50.322 09:26:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:50.322 09:26:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:50.322 09:26:33 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:50.322 09:26:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:50.322 09:26:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:50.322 09:26:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:50.322 09:26:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:50.322 09:26:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:50.322 09:26:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:16:50.322 09:26:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:50.890 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:50.890 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:50.890 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:50.890 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:50.890 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:50.891 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:50.891 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:50.891 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:16:50.891 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:50.891 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:50.891 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:50.891 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:50.891 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:50.891 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:50.891 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:50.891 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:50.891 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:50.891 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:50.891 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:50.891 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:50.891 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:50.891 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:50.891 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:50.891 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:50.891 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:50.891 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:50.891 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:50.891 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:50.891 09:26:35 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:50.891 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:50.891 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:50.891 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:50.891 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:16:50.891 00:16:50.891 --- 10.0.0.2 ping statistics --- 00:16:50.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:50.891 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:16:50.891 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:50.891 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:50.891 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:16:50.891 00:16:50.891 --- 10.0.0.1 ping statistics --- 00:16:50.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:50.891 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:16:50.891 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:50.891 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:16:50.891 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:50.891 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:50.891 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:50.891 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:50.891 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:50.891 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:50.891 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:51.149 09:26:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:16:51.149 09:26:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:16:51.149 09:26:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:16:51.149 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:51.149 09:26:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:51.149 09:26:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:51.149 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=713680 00:16:51.149 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:51.149 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 713680 00:16:51.149 09:26:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 713680 ']' 00:16:51.149 09:26:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.149 09:26:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:51.149 09:26:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:16:51.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:51.149 09:26:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:51.149 09:26:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:51.149 [2024-07-14 09:26:35.408889] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:16:51.149 [2024-07-14 09:26:35.408970] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:51.149 EAL: No free 2048 kB hugepages reported on node 1 00:16:51.149 [2024-07-14 09:26:35.474993] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:51.149 [2024-07-14 09:26:35.562108] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:51.149 [2024-07-14 09:26:35.562178] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:51.149 [2024-07-14 09:26:35.562201] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:51.149 [2024-07-14 09:26:35.562212] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:51.149 [2024-07-14 09:26:35.562222] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:51.149 [2024-07-14 09:26:35.562302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:51.149 [2024-07-14 09:26:35.562368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:51.149 [2024-07-14 09:26:35.562438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:51.149 [2024-07-14 09:26:35.562435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:16:51.407 09:26:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:51.407 09:26:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:16:51.407 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:51.407 09:26:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:51.407 09:26:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:51.407 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:51.407 09:26:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:51.407 09:26:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.407 09:26:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:51.408 [2024-07-14 09:26:35.704485] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:51.408 09:26:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.408 09:26:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:16:51.408 09:26:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:51.408 09:26:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:51.408 09:26:35 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:51.408 09:26:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:16:51.408 09:26:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:16:51.408 09:26:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.408 09:26:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:51.408 Malloc0 00:16:51.408 [2024-07-14 09:26:35.765512] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:51.408 09:26:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.408 09:26:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:16:51.408 09:26:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:51.408 09:26:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:51.408 09:26:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=713831 00:16:51.408 09:26:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 713831 /var/tmp/bdevperf.sock 00:16:51.408 09:26:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 713831 ']' 00:16:51.408 09:26:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:51.408 09:26:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:16:51.408 09:26:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:51.408 09:26:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:51.408 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:51.408 09:26:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:51.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
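The create_subsystems batch above is written into rpcs.txt and piped to rpc_cmd in one shot, so the individual target-side RPCs are not echoed in this trace. A representative sequence for what the trace shows being created (the Malloc0 bdev, subsystem nqn.2016-06.io.spdk:cnode0 restricted to host nqn.2016-06.io.spdk:host0, and the NVMe/TCP listener on 10.0.0.2:4420, with the transport itself already created by nvmf_create_transport -t tcp -o -u 8192 earlier) would look roughly like the sketch below; the malloc sizing, serial number and exact flag spellings are assumptions rather than values read from this log (rpc.py stands for spdk/scripts/rpc.py):

    # sketch only -- the real batch lives in test/nvmf/target/host_management.sh
    rpc.py bdev_malloc_create 64 512 -b Malloc0                                       # backing bdev (size/block size assumed)
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0000000000000001   # serial number assumed
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0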
00:16:51.408 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:51.408 09:26:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:51.408 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:51.408 09:26:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:51.408 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:51.408 { 00:16:51.408 "params": { 00:16:51.408 "name": "Nvme$subsystem", 00:16:51.408 "trtype": "$TEST_TRANSPORT", 00:16:51.408 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:51.408 "adrfam": "ipv4", 00:16:51.408 "trsvcid": "$NVMF_PORT", 00:16:51.408 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:51.408 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:51.408 "hdgst": ${hdgst:-false}, 00:16:51.408 "ddgst": ${ddgst:-false} 00:16:51.408 }, 00:16:51.408 "method": "bdev_nvme_attach_controller" 00:16:51.408 } 00:16:51.408 EOF 00:16:51.408 )") 00:16:51.408 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:51.408 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:51.408 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:51.408 09:26:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:51.408 "params": { 00:16:51.408 "name": "Nvme0", 00:16:51.408 "trtype": "tcp", 00:16:51.408 "traddr": "10.0.0.2", 00:16:51.408 "adrfam": "ipv4", 00:16:51.408 "trsvcid": "4420", 00:16:51.408 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:51.408 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:51.408 "hdgst": false, 00:16:51.408 "ddgst": false 00:16:51.408 }, 00:16:51.408 "method": "bdev_nvme_attach_controller" 00:16:51.408 }' 00:16:51.408 [2024-07-14 09:26:35.843528] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:16:51.408 [2024-07-14 09:26:35.843616] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid713831 ] 00:16:51.666 EAL: No free 2048 kB hugepages reported on node 1 00:16:51.666 [2024-07-14 09:26:35.906200] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.666 [2024-07-14 09:26:35.993487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.924 Running I/O for 10 seconds... 
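bdevperf is pointed at that target through the JSON fragment printed above (a single bdev_nvme_attach_controller entry for 10.0.0.2:4420, generated by gen_nvmf_target_json and fed in over /dev/fd/63), and the harness then polls bdevperf's RPC socket until the attached bdev has serviced at least 100 reads. A minimal sketch of that waitforio pattern, with the socket path, bdev name, jq filter and thresholds taken from the trace and only the loop wiring assumed (rpc.py = spdk/scripts/rpc.py):

    # poll until Nvme0n1 has completed >= 100 reads, up to 10 attempts
    i=10
    while (( i != 0 )); do
        reads=$(rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
        [ "$reads" -ge 100 ] && break      # the trace shows 65 on the first pass, 391 on the second
        sleep 0.25
        (( i-- ))
    done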
00:16:51.924 09:26:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:51.924 09:26:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:16:51.924 09:26:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:51.924 09:26:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.924 09:26:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:51.924 09:26:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.924 09:26:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:51.924 09:26:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:16:51.924 09:26:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:51.924 09:26:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:16:51.924 09:26:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:16:51.924 09:26:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:16:51.924 09:26:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:16:51.924 09:26:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:51.924 09:26:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:51.924 09:26:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:51.924 09:26:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.924 09:26:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:51.924 09:26:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.924 09:26:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=65 00:16:51.925 09:26:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 65 -ge 100 ']' 00:16:51.925 09:26:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:16:52.184 09:26:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:16:52.184 09:26:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:52.184 09:26:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:52.184 09:26:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:52.184 09:26:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.184 09:26:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:52.184 09:26:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.184 09:26:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=391 00:16:52.184 09:26:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 391 -ge 100 ']' 00:16:52.184 09:26:36 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@59 -- # ret=0 00:16:52.184 09:26:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:16:52.184 09:26:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:16:52.184 09:26:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:52.184 09:26:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.184 09:26:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:52.185 [2024-07-14 09:26:36.604778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:63360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.185 [2024-07-14 09:26:36.604832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.604863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:63488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.185 [2024-07-14 09:26:36.604889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.604907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:63616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.185 [2024-07-14 09:26:36.604932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.604949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:63744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.185 [2024-07-14 09:26:36.604963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.604989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:63872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.185 [2024-07-14 09:26:36.605004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.605020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:64000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.185 [2024-07-14 09:26:36.605035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.605050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:64128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.185 [2024-07-14 09:26:36.605064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.605080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:64256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.185 [2024-07-14 09:26:36.605095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.605110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:55 nsid:1 lba:64384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.185 [2024-07-14 09:26:36.605125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.605141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:64512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.185 [2024-07-14 09:26:36.605156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.605172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:64640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.185 [2024-07-14 09:26:36.605186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.605223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:64768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.185 [2024-07-14 09:26:36.605239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.605255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.185 [2024-07-14 09:26:36.605271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.605287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:65024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.185 [2024-07-14 09:26:36.605302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.605318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.185 [2024-07-14 09:26:36.605333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.605349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.185 [2024-07-14 09:26:36.605363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.605378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.185 [2024-07-14 09:26:36.605392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.605409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.185 [2024-07-14 09:26:36.605424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.605440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:65664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.185 [2024-07-14 09:26:36.605454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.605470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:65792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.185 [2024-07-14 09:26:36.605484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.605500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:65920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.185 [2024-07-14 09:26:36.605515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.605531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:57856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.185 [2024-07-14 09:26:36.605546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.605562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:57984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.185 [2024-07-14 09:26:36.605577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.605594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:58112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.185 [2024-07-14 09:26:36.605611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.605628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:58240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.185 [2024-07-14 09:26:36.605643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.605659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:58368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.185 [2024-07-14 09:26:36.605673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.605689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:58496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.185 [2024-07-14 09:26:36.605703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.605718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:58624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.185 [2024-07-14 09:26:36.605732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.605748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:58752 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.185 [2024-07-14 09:26:36.605763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.605778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:58880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.185 [2024-07-14 09:26:36.605792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.605808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:59008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.185 [2024-07-14 09:26:36.605823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.605838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:59136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.185 [2024-07-14 09:26:36.605852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.605877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:59264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.185 [2024-07-14 09:26:36.605894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.605910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:59392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.185 [2024-07-14 09:26:36.605929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.605945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:59520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.185 [2024-07-14 09:26:36.605959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.605975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:59648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.185 [2024-07-14 09:26:36.605989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.606004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:59776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.185 [2024-07-14 09:26:36.606023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.606039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:59904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.185 [2024-07-14 09:26:36.606054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.606070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:60032 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:52.185 [2024-07-14 09:26:36.606084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.606101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:60160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.185 [2024-07-14 09:26:36.606116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.606132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:60288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.185 [2024-07-14 09:26:36.606146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.606169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:60416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.185 [2024-07-14 09:26:36.606184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.606200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:60544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.185 [2024-07-14 09:26:36.606214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.606234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:60672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.185 [2024-07-14 09:26:36.606249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.606265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:60800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.185 [2024-07-14 09:26:36.606279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.606295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:60928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.185 [2024-07-14 09:26:36.606309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.606325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:61056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.185 [2024-07-14 09:26:36.606339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.606354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:61184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.185 [2024-07-14 09:26:36.606368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.606399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:61312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:52.185 [2024-07-14 09:26:36.606413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.606432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:61440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.185 [2024-07-14 09:26:36.606447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.606462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:61568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.185 [2024-07-14 09:26:36.606475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.606490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:61696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.185 [2024-07-14 09:26:36.606505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.606520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:61824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.185 [2024-07-14 09:26:36.606534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.606549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:61952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.185 [2024-07-14 09:26:36.606563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.606579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:62080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.185 [2024-07-14 09:26:36.606592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.606607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:62208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.185 [2024-07-14 09:26:36.606621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.606637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:62336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.185 [2024-07-14 09:26:36.606650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.606665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:62464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.185 [2024-07-14 09:26:36.606679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.606694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:62592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.185 [2024-07-14 
09:26:36.606708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.606723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:62720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.185 [2024-07-14 09:26:36.606737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.606752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:62848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.185 [2024-07-14 09:26:36.606766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.606781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:62976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.185 [2024-07-14 09:26:36.606799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.606814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:63104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.185 [2024-07-14 09:26:36.606828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.606843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:63232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:52.185 [2024-07-14 09:26:36.606857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.185 [2024-07-14 09:26:36.606894] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x621420 is same with the state(5) to be set 00:16:52.185 [2024-07-14 09:26:36.606975] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x621420 was disconnected and freed. reset controller. 
00:16:52.185 [2024-07-14 09:26:36.608120] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:52.185 09:26:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.185 09:26:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:52.185 09:26:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.185 09:26:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:52.185 task offset: 63360 on job bdev=Nvme0n1 fails 00:16:52.185 00:16:52.185 Latency(us) 00:16:52.185 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:52.185 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:52.185 Job: Nvme0n1 ended in about 0.39 seconds with error 00:16:52.185 Verification LBA range: start 0x0 length 0x400 00:16:52.185 Nvme0n1 : 0.39 1145.00 71.56 162.12 0.00 47631.40 2718.53 43302.31 00:16:52.185 =================================================================================================================== 00:16:52.185 Total : 1145.00 71.56 162.12 0.00 47631.40 2718.53 43302.31 00:16:52.185 [2024-07-14 09:26:36.610030] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:52.185 [2024-07-14 09:26:36.610059] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x627000 (9): Bad file descriptor 00:16:52.185 09:26:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.185 09:26:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:16:52.185 [2024-07-14 09:26:36.622570] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
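The burst of ABORTED - SQ DELETION completions and the controller reset above are the point of this test case: the host is pulled out of the subsystem while bdevperf still has a queue depth of 64 in flight, so every queued command on the deleted submission queue is aborted, and the host is then re-added so bdev_nvme's reset can reconnect. The two calls the trace shows, expressed against rpc.py (rpc.py = spdk/scripts/rpc.py, run against the target's default /var/tmp/spdk.sock):

    rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # target tears the qpair down, in-flight I/O is aborted
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0      # the resetting controller can then attach again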
00:16:53.557 09:26:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 713831 00:16:53.557 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (713831) - No such process 00:16:53.557 09:26:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:16:53.557 09:26:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:16:53.557 09:26:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:53.557 09:26:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:16:53.557 09:26:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:53.557 09:26:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:53.557 09:26:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:53.557 09:26:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:53.557 { 00:16:53.557 "params": { 00:16:53.557 "name": "Nvme$subsystem", 00:16:53.557 "trtype": "$TEST_TRANSPORT", 00:16:53.557 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:53.557 "adrfam": "ipv4", 00:16:53.557 "trsvcid": "$NVMF_PORT", 00:16:53.557 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:53.557 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:53.557 "hdgst": ${hdgst:-false}, 00:16:53.557 "ddgst": ${ddgst:-false} 00:16:53.557 }, 00:16:53.557 "method": "bdev_nvme_attach_controller" 00:16:53.557 } 00:16:53.557 EOF 00:16:53.557 )") 00:16:53.557 09:26:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:53.557 09:26:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:53.557 09:26:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:53.557 09:26:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:53.557 "params": { 00:16:53.557 "name": "Nvme0", 00:16:53.557 "trtype": "tcp", 00:16:53.557 "traddr": "10.0.0.2", 00:16:53.557 "adrfam": "ipv4", 00:16:53.557 "trsvcid": "4420", 00:16:53.557 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:53.557 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:53.557 "hdgst": false, 00:16:53.557 "ddgst": false 00:16:53.558 }, 00:16:53.558 "method": "bdev_nvme_attach_controller" 00:16:53.558 }' 00:16:53.558 [2024-07-14 09:26:37.665537] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:16:53.558 [2024-07-14 09:26:37.665644] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid714001 ] 00:16:53.558 EAL: No free 2048 kB hugepages reported on node 1 00:16:53.558 [2024-07-14 09:26:37.730991] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.558 [2024-07-14 09:26:37.819966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.815 Running I/O for 1 seconds... 
00:16:54.750 00:16:54.751 Latency(us) 00:16:54.751 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:54.751 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:54.751 Verification LBA range: start 0x0 length 0x400 00:16:54.751 Nvme0n1 : 1.05 1153.72 72.11 0.00 0.00 54717.05 16796.63 42525.58 00:16:54.751 =================================================================================================================== 00:16:54.751 Total : 1153.72 72.11 0.00 0.00 54717.05 16796.63 42525.58 00:16:55.009 09:26:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:16:55.009 09:26:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:16:55.009 09:26:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:55.009 09:26:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:55.009 09:26:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:16:55.009 09:26:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:55.009 09:26:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:16:55.009 09:26:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:55.009 09:26:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:16:55.009 09:26:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:55.009 09:26:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:55.009 rmmod nvme_tcp 00:16:55.009 rmmod nvme_fabrics 00:16:55.009 rmmod nvme_keyring 00:16:55.009 09:26:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:55.009 09:26:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:16:55.009 09:26:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:16:55.009 09:26:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 713680 ']' 00:16:55.009 09:26:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 713680 00:16:55.009 09:26:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 713680 ']' 00:16:55.009 09:26:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 713680 00:16:55.009 09:26:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:16:55.270 09:26:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:55.270 09:26:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 713680 00:16:55.270 09:26:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:55.270 09:26:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:55.270 09:26:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 713680' 00:16:55.270 killing process with pid 713680 00:16:55.270 09:26:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 713680 00:16:55.270 09:26:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 713680 00:16:55.270 [2024-07-14 09:26:39.714701] app.c: 
710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:16:55.569 09:26:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:55.569 09:26:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:55.569 09:26:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:55.569 09:26:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:55.569 09:26:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:55.569 09:26:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:55.569 09:26:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:55.569 09:26:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:57.473 09:26:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:57.474 09:26:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:16:57.474 00:16:57.474 real 0m8.607s 00:16:57.474 user 0m18.968s 00:16:57.474 sys 0m2.682s 00:16:57.474 09:26:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:57.474 09:26:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:57.474 ************************************ 00:16:57.474 END TEST nvmf_host_management 00:16:57.474 ************************************ 00:16:57.474 09:26:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:57.474 09:26:41 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:57.474 09:26:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:57.474 09:26:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:57.474 09:26:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:57.474 ************************************ 00:16:57.474 START TEST nvmf_lvol 00:16:57.474 ************************************ 00:16:57.474 09:26:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:57.474 * Looking for test storage... 
00:16:57.474 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:57.474 09:26:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:57.474 09:26:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:16:57.474 09:26:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:57.474 09:26:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:57.474 09:26:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:57.474 09:26:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:57.474 09:26:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:57.474 09:26:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:57.474 09:26:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:57.474 09:26:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:57.474 09:26:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:57.474 09:26:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:57.474 09:26:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:57.474 09:26:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:57.474 09:26:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:57.474 09:26:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:57.474 09:26:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:57.474 09:26:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:57.474 09:26:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:57.474 09:26:41 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:57.474 09:26:41 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:57.474 09:26:41 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:57.474 09:26:41 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.474 09:26:41 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.474 09:26:41 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.474 09:26:41 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:16:57.474 09:26:41 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.474 09:26:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:16:57.474 09:26:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:57.474 09:26:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:57.474 09:26:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:57.474 09:26:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:57.474 09:26:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:57.474 09:26:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:57.474 09:26:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:57.474 09:26:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:57.474 09:26:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:57.474 09:26:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:57.474 09:26:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:16:57.474 09:26:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:16:57.474 09:26:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:57.474 09:26:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:16:57.474 09:26:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:57.474 09:26:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:57.474 09:26:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:57.474 09:26:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:57.474 09:26:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:57.474 09:26:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:57.474 09:26:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:57.474 09:26:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:57.474 09:26:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:57.474 09:26:41 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:57.474 09:26:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:16:57.474 09:26:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:59.375 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:59.375 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:16:59.375 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:59.634 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:59.634 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:59.634 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:59.634 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:59.634 
09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:59.634 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:59.634 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:16:59.634 00:16:59.634 --- 10.0.0.2 ping statistics --- 00:16:59.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:59.634 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:59.634 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:59.634 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:16:59.634 00:16:59.634 --- 10.0.0.1 ping statistics --- 00:16:59.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:59.634 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:59.634 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=716190 00:16:59.635 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:59.635 09:26:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 716190 00:16:59.635 09:26:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 716190 ']' 00:16:59.635 09:26:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:59.635 09:26:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:59.635 09:26:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:59.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:59.635 09:26:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:59.635 09:26:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:59.635 [2024-07-14 09:26:44.041355] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:16:59.635 [2024-07-14 09:26:44.041457] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:59.635 EAL: No free 2048 kB hugepages reported on node 1 00:16:59.893 [2024-07-14 09:26:44.112351] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:59.893 [2024-07-14 09:26:44.201676] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:59.893 [2024-07-14 09:26:44.201740] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
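Stripped of the xtrace prefixes, the nvmf_tcp_init sequence traced above reduces to the following sketch; the interface names (cvl_0_0 / cvl_0_1), the namespace name and the 10.0.0.x addresses are the ones this run detected and chose, not fixed values.

  # Move one of the two detected ports into its own network namespace so that
  # target and initiator get separate network stacks on a single machine.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address (default ns)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address (inside ns)
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # accept inbound NVMe/TCP on 4420
  ping -c 1 10.0.0.2                                                   # default ns -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # namespace -> default ns

Every nvmf_tgt launched later in the run is prefixed with "ip netns exec cvl_0_0_ns_spdk" (NVMF_TARGET_NS_CMD), so the target listens on 10.0.0.2 inside the namespace while perf and bdevperf connect from 10.0.0.1 in the default namespace.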
00:16:59.893 [2024-07-14 09:26:44.201766] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:59.893 [2024-07-14 09:26:44.201779] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:59.893 [2024-07-14 09:26:44.201791] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:59.893 [2024-07-14 09:26:44.201888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:59.893 [2024-07-14 09:26:44.201934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:59.893 [2024-07-14 09:26:44.201937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:59.893 09:26:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:59.894 09:26:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:16:59.894 09:26:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:59.894 09:26:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:59.894 09:26:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:59.894 09:26:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:59.894 09:26:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:00.152 [2024-07-14 09:26:44.567674] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:00.152 09:26:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:00.410 09:26:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:17:00.410 09:26:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:00.668 09:26:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:17:00.668 09:26:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:17:00.926 09:26:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:17:01.184 09:26:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=6b192b70-4d81-4a99-b696-1ef0deb757b3 00:17:01.184 09:26:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6b192b70-4d81-4a99-b696-1ef0deb757b3 lvol 20 00:17:01.442 09:26:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=68d79863-5a5a-4eee-b8bd-3d81871ccb5e 00:17:01.442 09:26:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:01.700 09:26:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 68d79863-5a5a-4eee-b8bd-3d81871ccb5e 00:17:01.958 09:26:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
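The lvol target in nvmf_lvol.sh is assembled purely through rpc.py calls against the freshly started nvmf_tgt; condensed (rpc.py abbreviates the full scripts/rpc.py path, and the UUIDs used later are captured from the rpc output exactly as the trace shows):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512                                  # -> Malloc0
  rpc.py bdev_malloc_create 64 512                                  # -> Malloc1
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'  # stripe the two malloc bdevs
  lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)                  # lvstore on top of the raid0 bdev
  lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)                 # 20 MiB volume on the lvstore
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The interesting part of the test is not this bring-up but what follows: the exported volume is snapshotted, resized, cloned and inflated while spdk_nvme_perf keeps writing to it.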
00:17:02.215 [2024-07-14 09:26:46.571467] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:02.215 09:26:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:02.473 09:26:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=716605 00:17:02.473 09:26:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:17:02.473 09:26:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:17:02.473 EAL: No free 2048 kB hugepages reported on node 1 00:17:03.409 09:26:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 68d79863-5a5a-4eee-b8bd-3d81871ccb5e MY_SNAPSHOT 00:17:03.668 09:26:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=caf3d48e-6b27-4564-9a64-bfde7ec815ab 00:17:03.668 09:26:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 68d79863-5a5a-4eee-b8bd-3d81871ccb5e 30 00:17:04.235 09:26:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone caf3d48e-6b27-4564-9a64-bfde7ec815ab MY_CLONE 00:17:04.235 09:26:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=c3461c66-1316-428b-900f-2dd2e2724944 00:17:04.235 09:26:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate c3461c66-1316-428b-900f-2dd2e2724944 00:17:04.803 09:26:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 716605 00:17:12.912 Initializing NVMe Controllers 00:17:12.912 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:17:12.912 Controller IO queue size 128, less than required. 00:17:12.912 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:12.912 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:17:12.912 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:17:12.912 Initialization complete. Launching workers. 
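The perf job exists to keep the namespace under constant random-write load while the lvol is mutated; the sequence between starting and reaping it reduces to the sketch below (variable names are illustrative, the actual UUIDs are the ones echoed above, paths abbreviated):

  spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
  perf_pid=$!
  sleep 1                                                   # give perf a moment to connect and start I/O
  snapshot=$(rpc.py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT) # snapshot the live volume
  rpc.py bdev_lvol_resize "$lvol" 30                        # grow it from 20 to 30 MiB
  clone=$(rpc.py bdev_lvol_clone "$snapshot" MY_CLONE)      # writable clone of the snapshot
  rpc.py bdev_lvol_inflate "$clone"                         # copy parent clusters so the clone stands alone
  wait "$perf_pid"

The latency table that follows therefore reports random-write performance measured while all of these lvol metadata operations were happening underneath.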
00:17:12.912 ======================================================== 00:17:12.912 Latency(us) 00:17:12.912 Device Information : IOPS MiB/s Average min max 00:17:12.912 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9997.50 39.05 12811.53 440.89 80063.80 00:17:12.912 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10658.20 41.63 12016.67 2144.41 75667.09 00:17:12.912 ======================================================== 00:17:12.912 Total : 20655.70 80.69 12401.39 440.89 80063.80 00:17:12.912 00:17:12.912 09:26:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:13.169 09:26:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 68d79863-5a5a-4eee-b8bd-3d81871ccb5e 00:17:13.426 09:26:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6b192b70-4d81-4a99-b696-1ef0deb757b3 00:17:13.684 09:26:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:17:13.684 09:26:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:17:13.684 09:26:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:17:13.684 09:26:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:13.684 09:26:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:17:13.684 09:26:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:13.684 09:26:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:17:13.684 09:26:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:13.684 09:26:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:13.684 rmmod nvme_tcp 00:17:13.684 rmmod nvme_fabrics 00:17:13.684 rmmod nvme_keyring 00:17:13.684 09:26:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:13.684 09:26:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:17:13.684 09:26:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:17:13.684 09:26:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 716190 ']' 00:17:13.684 09:26:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 716190 00:17:13.684 09:26:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 716190 ']' 00:17:13.684 09:26:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 716190 00:17:13.684 09:26:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:17:13.684 09:26:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:13.684 09:26:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 716190 00:17:13.684 09:26:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:13.684 09:26:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:13.684 09:26:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 716190' 00:17:13.684 killing process with pid 716190 00:17:13.684 09:26:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 716190 00:17:13.684 09:26:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 716190 00:17:14.249 09:26:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:14.249 09:26:58 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:14.249 09:26:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:14.249 09:26:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:14.249 09:26:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:14.249 09:26:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:14.249 09:26:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:14.249 09:26:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:16.175 09:27:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:16.175 00:17:16.175 real 0m18.615s 00:17:16.175 user 1m3.275s 00:17:16.175 sys 0m5.801s 00:17:16.175 09:27:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:16.175 09:27:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:16.175 ************************************ 00:17:16.175 END TEST nvmf_lvol 00:17:16.175 ************************************ 00:17:16.175 09:27:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:16.175 09:27:00 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:16.175 09:27:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:16.175 09:27:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:16.175 09:27:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:16.175 ************************************ 00:17:16.175 START TEST nvmf_lvs_grow 00:17:16.175 ************************************ 00:17:16.175 09:27:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:16.175 * Looking for test storage... 
00:17:16.175 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:16.175 09:27:00 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:16.175 09:27:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:17:16.175 09:27:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:16.175 09:27:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:16.175 09:27:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:16.175 09:27:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:16.175 09:27:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:16.175 09:27:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:16.175 09:27:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:16.175 09:27:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:16.175 09:27:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:16.175 09:27:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:16.175 09:27:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:16.175 09:27:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:16.175 09:27:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:16.175 09:27:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:16.175 09:27:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:16.175 09:27:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:16.175 09:27:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:16.175 09:27:00 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:16.175 09:27:00 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:16.175 09:27:00 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:16.176 09:27:00 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.176 09:27:00 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.176 09:27:00 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.176 09:27:00 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:17:16.176 09:27:00 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.176 09:27:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:17:16.176 09:27:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:16.176 09:27:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:16.176 09:27:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:16.176 09:27:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:16.176 09:27:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:16.176 09:27:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:16.176 09:27:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:16.176 09:27:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:16.176 09:27:00 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:16.176 09:27:00 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:16.176 09:27:00 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:17:16.176 09:27:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:16.176 09:27:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:16.176 09:27:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:16.176 09:27:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:16.176 09:27:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:16.176 09:27:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:17:16.176 09:27:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:16.176 09:27:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:16.176 09:27:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:16.176 09:27:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:16.176 09:27:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:17:16.176 09:27:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:18.075 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:18.075 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:18.075 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:18.075 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:18.075 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:18.332 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:18.332 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:18.332 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:18.332 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:18.332 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:17:18.332 00:17:18.332 --- 10.0.0.2 ping statistics --- 00:17:18.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.332 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:17:18.332 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:18.332 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:18.332 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:17:18.332 00:17:18.332 --- 10.0.0.1 ping statistics --- 00:17:18.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.332 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:17:18.332 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:18.332 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:17:18.332 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:18.332 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:18.332 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:18.332 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:18.332 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:18.332 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:18.332 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:18.332 09:27:02 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:17:18.332 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:18.332 09:27:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:18.332 09:27:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:18.332 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:18.332 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=719975 00:17:18.332 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 719975 00:17:18.332 09:27:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 719975 ']' 00:17:18.332 09:27:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:18.332 09:27:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:18.332 09:27:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:18.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:18.333 09:27:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:18.333 09:27:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:18.333 [2024-07-14 09:27:02.641356] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:17:18.333 [2024-07-14 09:27:02.641437] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:18.333 EAL: No free 2048 kB hugepages reported on node 1 00:17:18.333 [2024-07-14 09:27:02.715033] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.590 [2024-07-14 09:27:02.802168] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:18.590 [2024-07-14 09:27:02.802215] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:18.590 [2024-07-14 09:27:02.802228] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:18.590 [2024-07-14 09:27:02.802239] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:18.590 [2024-07-14 09:27:02.802248] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:18.590 [2024-07-14 09:27:02.802274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.590 09:27:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:18.590 09:27:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:17:18.590 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:18.590 09:27:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:18.590 09:27:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:18.590 09:27:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:18.590 09:27:02 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:18.852 [2024-07-14 09:27:03.208803] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:18.852 09:27:03 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:17:18.852 09:27:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:17:18.852 09:27:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:18.852 09:27:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:18.852 ************************************ 00:17:18.852 START TEST lvs_grow_clean 00:17:18.852 ************************************ 00:17:18.852 09:27:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:17:18.852 09:27:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:18.852 09:27:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:18.852 09:27:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:18.852 09:27:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:18.852 09:27:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:18.852 09:27:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:18.852 09:27:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:18.852 09:27:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:18.852 09:27:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:19.415 09:27:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:17:19.415 09:27:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:19.415 09:27:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=03ce64e1-a3d9-4b4f-b0dc-8ec864bb43ce 00:17:19.415 09:27:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 03ce64e1-a3d9-4b4f-b0dc-8ec864bb43ce 00:17:19.415 09:27:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:19.672 09:27:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:19.672 09:27:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:19.672 09:27:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 03ce64e1-a3d9-4b4f-b0dc-8ec864bb43ce lvol 150 00:17:19.928 09:27:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=7a73ace0-3b78-4668-927a-606da91f8454 00:17:19.929 09:27:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:19.929 09:27:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:20.186 [2024-07-14 09:27:04.604143] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:20.186 [2024-07-14 09:27:04.604240] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:20.186 true 00:17:20.186 09:27:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 03ce64e1-a3d9-4b4f-b0dc-8ec864bb43ce 00:17:20.186 09:27:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:20.444 09:27:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:20.444 09:27:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:20.702 09:27:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7a73ace0-3b78-4668-927a-606da91f8454 00:17:20.961 09:27:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:21.218 [2024-07-14 09:27:05.623313] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:21.218 09:27:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:21.477 09:27:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=720376 00:17:21.477 09:27:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:21.477 09:27:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:21.477 09:27:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 720376 /var/tmp/bdevperf.sock 00:17:21.477 09:27:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 720376 ']' 00:17:21.477 09:27:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:21.477 09:27:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:21.477 09:27:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:21.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:21.477 09:27:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:21.477 09:27:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:21.477 [2024-07-14 09:27:05.929443] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
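By this point target/nvmf_lvs_grow.sh (lvs_grow_clean) has already staged the growth scenario whose effects show up in the bdevperf run below; condensed, with aio_file standing in for the full .../test/nvmf/target/aio_bdev path and the lvstore UUID captured at creation time:

  truncate -s 200M aio_file
  rpc.py bdev_aio_create aio_file aio_bdev 4096                  # 51200 blocks of 4 KiB
  lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49
  lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 150)             # 150 MiB volume
  truncate -s 400M aio_file                                      # grow the backing file on disk
  rpc.py bdev_aio_rescan aio_bdev                                # aio bdev now reports 102400 blocks
  rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # still 49
  rpc.py bdev_lvol_grow_lvstore -u "$lvs"                        # issued later, during the bdevperf run
  rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 99

With 4 MiB clusters the 200 MiB file yields 49 data clusters rather than 50, the difference presumably going to lvstore metadata; rescanning the aio bdev alone changes nothing, and only bdev_lvol_grow_lvstore doubles the count to 99, which is exactly what the data_clusters checks later in this test assert.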
00:17:21.477 [2024-07-14 09:27:05.929533] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid720376 ] 00:17:21.735 EAL: No free 2048 kB hugepages reported on node 1 00:17:21.735 [2024-07-14 09:27:05.988862] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.735 [2024-07-14 09:27:06.074732] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:21.735 09:27:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:21.736 09:27:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:17:21.736 09:27:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:22.302 Nvme0n1 00:17:22.302 09:27:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:22.560 [ 00:17:22.560 { 00:17:22.560 "name": "Nvme0n1", 00:17:22.560 "aliases": [ 00:17:22.560 "7a73ace0-3b78-4668-927a-606da91f8454" 00:17:22.560 ], 00:17:22.560 "product_name": "NVMe disk", 00:17:22.560 "block_size": 4096, 00:17:22.560 "num_blocks": 38912, 00:17:22.560 "uuid": "7a73ace0-3b78-4668-927a-606da91f8454", 00:17:22.560 "assigned_rate_limits": { 00:17:22.560 "rw_ios_per_sec": 0, 00:17:22.560 "rw_mbytes_per_sec": 0, 00:17:22.560 "r_mbytes_per_sec": 0, 00:17:22.560 "w_mbytes_per_sec": 0 00:17:22.560 }, 00:17:22.560 "claimed": false, 00:17:22.560 "zoned": false, 00:17:22.560 "supported_io_types": { 00:17:22.560 "read": true, 00:17:22.560 "write": true, 00:17:22.560 "unmap": true, 00:17:22.560 "flush": true, 00:17:22.560 "reset": true, 00:17:22.560 "nvme_admin": true, 00:17:22.560 "nvme_io": true, 00:17:22.560 "nvme_io_md": false, 00:17:22.560 "write_zeroes": true, 00:17:22.560 "zcopy": false, 00:17:22.560 "get_zone_info": false, 00:17:22.560 "zone_management": false, 00:17:22.560 "zone_append": false, 00:17:22.560 "compare": true, 00:17:22.560 "compare_and_write": true, 00:17:22.560 "abort": true, 00:17:22.560 "seek_hole": false, 00:17:22.560 "seek_data": false, 00:17:22.560 "copy": true, 00:17:22.560 "nvme_iov_md": false 00:17:22.560 }, 00:17:22.560 "memory_domains": [ 00:17:22.560 { 00:17:22.560 "dma_device_id": "system", 00:17:22.560 "dma_device_type": 1 00:17:22.560 } 00:17:22.560 ], 00:17:22.560 "driver_specific": { 00:17:22.560 "nvme": [ 00:17:22.560 { 00:17:22.560 "trid": { 00:17:22.560 "trtype": "TCP", 00:17:22.560 "adrfam": "IPv4", 00:17:22.560 "traddr": "10.0.0.2", 00:17:22.560 "trsvcid": "4420", 00:17:22.560 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:22.560 }, 00:17:22.560 "ctrlr_data": { 00:17:22.560 "cntlid": 1, 00:17:22.560 "vendor_id": "0x8086", 00:17:22.560 "model_number": "SPDK bdev Controller", 00:17:22.560 "serial_number": "SPDK0", 00:17:22.560 "firmware_revision": "24.09", 00:17:22.560 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:22.560 "oacs": { 00:17:22.560 "security": 0, 00:17:22.560 "format": 0, 00:17:22.560 "firmware": 0, 00:17:22.560 "ns_manage": 0 00:17:22.560 }, 00:17:22.560 "multi_ctrlr": true, 00:17:22.560 "ana_reporting": false 00:17:22.560 }, 
00:17:22.560 "vs": { 00:17:22.560 "nvme_version": "1.3" 00:17:22.560 }, 00:17:22.560 "ns_data": { 00:17:22.560 "id": 1, 00:17:22.560 "can_share": true 00:17:22.560 } 00:17:22.560 } 00:17:22.560 ], 00:17:22.560 "mp_policy": "active_passive" 00:17:22.560 } 00:17:22.560 } 00:17:22.560 ] 00:17:22.560 09:27:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=720439 00:17:22.560 09:27:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:22.560 09:27:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:22.818 Running I/O for 10 seconds... 00:17:23.755 Latency(us) 00:17:23.755 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:23.755 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:23.755 Nvme0n1 : 1.00 13638.00 53.27 0.00 0.00 0.00 0.00 0.00 00:17:23.755 =================================================================================================================== 00:17:23.755 Total : 13638.00 53.27 0.00 0.00 0.00 0.00 0.00 00:17:23.755 00:17:24.691 09:27:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 03ce64e1-a3d9-4b4f-b0dc-8ec864bb43ce 00:17:24.691 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:24.691 Nvme0n1 : 2.00 13763.00 53.76 0.00 0.00 0.00 0.00 0.00 00:17:24.691 =================================================================================================================== 00:17:24.691 Total : 13763.00 53.76 0.00 0.00 0.00 0.00 0.00 00:17:24.691 00:17:24.950 true 00:17:24.950 09:27:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 03ce64e1-a3d9-4b4f-b0dc-8ec864bb43ce 00:17:24.950 09:27:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:25.209 09:27:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:25.209 09:27:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:25.209 09:27:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 720439 00:17:25.776 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:25.776 Nvme0n1 : 3.00 13959.67 54.53 0.00 0.00 0.00 0.00 0.00 00:17:25.776 =================================================================================================================== 00:17:25.776 Total : 13959.67 54.53 0.00 0.00 0.00 0.00 0.00 00:17:25.776 00:17:26.711 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:26.711 Nvme0n1 : 4.00 14033.25 54.82 0.00 0.00 0.00 0.00 0.00 00:17:26.711 =================================================================================================================== 00:17:26.711 Total : 14033.25 54.82 0.00 0.00 0.00 0.00 0.00 00:17:26.711 00:17:27.645 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:27.645 Nvme0n1 : 5.00 14106.80 55.10 0.00 0.00 0.00 0.00 0.00 00:17:27.645 =================================================================================================================== 00:17:27.645 
Total : 14106.80 55.10 0.00 0.00 0.00 0.00 0.00 00:17:27.645 00:17:29.022 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:29.022 Nvme0n1 : 6.00 14208.83 55.50 0.00 0.00 0.00 0.00 0.00 00:17:29.022 =================================================================================================================== 00:17:29.022 Total : 14208.83 55.50 0.00 0.00 0.00 0.00 0.00 00:17:29.022 00:17:29.985 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:29.985 Nvme0n1 : 7.00 14254.57 55.68 0.00 0.00 0.00 0.00 0.00 00:17:29.985 =================================================================================================================== 00:17:29.985 Total : 14254.57 55.68 0.00 0.00 0.00 0.00 0.00 00:17:29.985 00:17:30.920 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:30.920 Nvme0n1 : 8.00 14282.88 55.79 0.00 0.00 0.00 0.00 0.00 00:17:30.920 =================================================================================================================== 00:17:30.920 Total : 14282.88 55.79 0.00 0.00 0.00 0.00 0.00 00:17:30.920 00:17:31.854 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:31.854 Nvme0n1 : 9.00 14308.11 55.89 0.00 0.00 0.00 0.00 0.00 00:17:31.854 =================================================================================================================== 00:17:31.854 Total : 14308.11 55.89 0.00 0.00 0.00 0.00 0.00 00:17:31.854 00:17:32.788 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:32.788 Nvme0n1 : 10.00 14336.50 56.00 0.00 0.00 0.00 0.00 0.00 00:17:32.788 =================================================================================================================== 00:17:32.788 Total : 14336.50 56.00 0.00 0.00 0.00 0.00 0.00 00:17:32.788 00:17:32.788 00:17:32.788 Latency(us) 00:17:32.788 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:32.788 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:32.788 Nvme0n1 : 10.01 14335.16 56.00 0.00 0.00 8922.44 4004.98 13883.92 00:17:32.788 =================================================================================================================== 00:17:32.788 Total : 14335.16 56.00 0.00 0.00 8922.44 4004.98 13883.92 00:17:32.788 0 00:17:32.788 09:27:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 720376 00:17:32.788 09:27:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 720376 ']' 00:17:32.788 09:27:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 720376 00:17:32.788 09:27:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:17:32.788 09:27:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:32.788 09:27:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 720376 00:17:32.788 09:27:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:32.788 09:27:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:32.788 09:27:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 720376' 00:17:32.788 killing process with pid 720376 00:17:32.788 09:27:17 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 720376 00:17:32.788 Received shutdown signal, test time was about 10.000000 seconds 00:17:32.788 00:17:32.788 Latency(us) 00:17:32.788 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:32.788 =================================================================================================================== 00:17:32.788 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:32.788 09:27:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 720376 00:17:33.046 09:27:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:33.304 09:27:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:33.562 09:27:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 03ce64e1-a3d9-4b4f-b0dc-8ec864bb43ce 00:17:33.562 09:27:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:33.820 09:27:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:33.820 09:27:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:17:33.820 09:27:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:34.082 [2024-07-14 09:27:18.435662] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:34.082 09:27:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 03ce64e1-a3d9-4b4f-b0dc-8ec864bb43ce 00:17:34.082 09:27:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:17:34.082 09:27:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 03ce64e1-a3d9-4b4f-b0dc-8ec864bb43ce 00:17:34.082 09:27:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:34.082 09:27:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:34.082 09:27:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:34.082 09:27:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:34.082 09:27:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:34.082 09:27:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:34.082 09:27:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:34.082 09:27:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:34.082 09:27:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 03ce64e1-a3d9-4b4f-b0dc-8ec864bb43ce 00:17:34.340 request: 00:17:34.340 { 00:17:34.340 "uuid": "03ce64e1-a3d9-4b4f-b0dc-8ec864bb43ce", 00:17:34.340 "method": "bdev_lvol_get_lvstores", 00:17:34.340 "req_id": 1 00:17:34.340 } 00:17:34.340 Got JSON-RPC error response 00:17:34.340 response: 00:17:34.340 { 00:17:34.340 "code": -19, 00:17:34.340 "message": "No such device" 00:17:34.340 } 00:17:34.340 09:27:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:17:34.340 09:27:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:34.340 09:27:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:34.340 09:27:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:34.340 09:27:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:34.598 aio_bdev 00:17:34.598 09:27:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 7a73ace0-3b78-4668-927a-606da91f8454 00:17:34.598 09:27:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=7a73ace0-3b78-4668-927a-606da91f8454 00:17:34.598 09:27:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:34.598 09:27:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:17:34.598 09:27:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:34.598 09:27:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:34.598 09:27:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:34.856 09:27:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7a73ace0-3b78-4668-927a-606da91f8454 -t 2000 00:17:35.114 [ 00:17:35.114 { 00:17:35.114 "name": "7a73ace0-3b78-4668-927a-606da91f8454", 00:17:35.114 "aliases": [ 00:17:35.114 "lvs/lvol" 00:17:35.114 ], 00:17:35.114 "product_name": "Logical Volume", 00:17:35.114 "block_size": 4096, 00:17:35.114 "num_blocks": 38912, 00:17:35.114 "uuid": "7a73ace0-3b78-4668-927a-606da91f8454", 00:17:35.114 "assigned_rate_limits": { 00:17:35.114 "rw_ios_per_sec": 0, 00:17:35.114 "rw_mbytes_per_sec": 0, 00:17:35.114 "r_mbytes_per_sec": 0, 00:17:35.114 "w_mbytes_per_sec": 0 00:17:35.114 }, 00:17:35.114 "claimed": false, 00:17:35.114 "zoned": false, 00:17:35.114 "supported_io_types": { 00:17:35.114 "read": true, 00:17:35.114 "write": true, 00:17:35.114 "unmap": true, 00:17:35.114 "flush": false, 00:17:35.114 "reset": true, 00:17:35.114 "nvme_admin": false, 00:17:35.114 "nvme_io": false, 00:17:35.114 
"nvme_io_md": false, 00:17:35.114 "write_zeroes": true, 00:17:35.114 "zcopy": false, 00:17:35.114 "get_zone_info": false, 00:17:35.114 "zone_management": false, 00:17:35.114 "zone_append": false, 00:17:35.114 "compare": false, 00:17:35.114 "compare_and_write": false, 00:17:35.114 "abort": false, 00:17:35.114 "seek_hole": true, 00:17:35.114 "seek_data": true, 00:17:35.114 "copy": false, 00:17:35.114 "nvme_iov_md": false 00:17:35.114 }, 00:17:35.114 "driver_specific": { 00:17:35.114 "lvol": { 00:17:35.114 "lvol_store_uuid": "03ce64e1-a3d9-4b4f-b0dc-8ec864bb43ce", 00:17:35.114 "base_bdev": "aio_bdev", 00:17:35.114 "thin_provision": false, 00:17:35.114 "num_allocated_clusters": 38, 00:17:35.114 "snapshot": false, 00:17:35.114 "clone": false, 00:17:35.114 "esnap_clone": false 00:17:35.114 } 00:17:35.114 } 00:17:35.114 } 00:17:35.114 ] 00:17:35.114 09:27:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:17:35.114 09:27:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 03ce64e1-a3d9-4b4f-b0dc-8ec864bb43ce 00:17:35.114 09:27:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:35.681 09:27:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:35.681 09:27:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 03ce64e1-a3d9-4b4f-b0dc-8ec864bb43ce 00:17:35.682 09:27:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:35.682 09:27:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:35.682 09:27:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7a73ace0-3b78-4668-927a-606da91f8454 00:17:35.940 09:27:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 03ce64e1-a3d9-4b4f-b0dc-8ec864bb43ce 00:17:36.506 09:27:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:36.763 09:27:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:36.763 00:17:36.763 real 0m17.727s 00:17:36.763 user 0m17.124s 00:17:36.763 sys 0m1.923s 00:17:36.763 09:27:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:36.763 09:27:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:36.763 ************************************ 00:17:36.763 END TEST lvs_grow_clean 00:17:36.763 ************************************ 00:17:36.763 09:27:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:17:36.763 09:27:21 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:17:36.763 09:27:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:36.763 09:27:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:17:36.763 09:27:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:36.763 ************************************ 00:17:36.763 START TEST lvs_grow_dirty 00:17:36.763 ************************************ 00:17:36.763 09:27:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:17:36.763 09:27:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:36.763 09:27:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:36.763 09:27:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:36.763 09:27:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:36.763 09:27:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:36.763 09:27:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:36.763 09:27:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:36.763 09:27:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:36.763 09:27:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:37.021 09:27:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:37.021 09:27:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:37.279 09:27:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=86b69cac-0a72-4d54-8f36-f9f151b5d0e2 00:17:37.279 09:27:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86b69cac-0a72-4d54-8f36-f9f151b5d0e2 00:17:37.279 09:27:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:37.537 09:27:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:37.537 09:27:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:37.537 09:27:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 86b69cac-0a72-4d54-8f36-f9f151b5d0e2 lvol 150 00:17:37.795 09:27:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=0cb75445-58f9-4e00-9e17-3d5f0d0ad022 00:17:37.795 09:27:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:37.795 09:27:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:38.053 
[2024-07-14 09:27:22.272004] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:38.053 [2024-07-14 09:27:22.272099] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:38.053 true 00:17:38.053 09:27:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86b69cac-0a72-4d54-8f36-f9f151b5d0e2 00:17:38.054 09:27:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:38.312 09:27:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:38.312 09:27:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:38.570 09:27:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0cb75445-58f9-4e00-9e17-3d5f0d0ad022 00:17:38.828 09:27:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:39.085 [2024-07-14 09:27:23.315169] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:39.085 09:27:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:39.343 09:27:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=722974 00:17:39.343 09:27:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:39.343 09:27:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:39.343 09:27:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 722974 /var/tmp/bdevperf.sock 00:17:39.343 09:27:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 722974 ']' 00:17:39.343 09:27:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:39.343 09:27:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:39.343 09:27:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:39.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:17:39.343 09:27:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:39.343 09:27:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:39.343 [2024-07-14 09:27:23.614291] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:17:39.343 [2024-07-14 09:27:23.614361] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid722974 ] 00:17:39.343 EAL: No free 2048 kB hugepages reported on node 1 00:17:39.343 [2024-07-14 09:27:23.675168] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.343 [2024-07-14 09:27:23.765770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:39.601 09:27:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:39.601 09:27:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:17:39.601 09:27:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:40.165 Nvme0n1 00:17:40.166 09:27:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:40.166 [ 00:17:40.166 { 00:17:40.166 "name": "Nvme0n1", 00:17:40.166 "aliases": [ 00:17:40.166 "0cb75445-58f9-4e00-9e17-3d5f0d0ad022" 00:17:40.166 ], 00:17:40.166 "product_name": "NVMe disk", 00:17:40.166 "block_size": 4096, 00:17:40.166 "num_blocks": 38912, 00:17:40.166 "uuid": "0cb75445-58f9-4e00-9e17-3d5f0d0ad022", 00:17:40.166 "assigned_rate_limits": { 00:17:40.166 "rw_ios_per_sec": 0, 00:17:40.166 "rw_mbytes_per_sec": 0, 00:17:40.166 "r_mbytes_per_sec": 0, 00:17:40.166 "w_mbytes_per_sec": 0 00:17:40.166 }, 00:17:40.166 "claimed": false, 00:17:40.166 "zoned": false, 00:17:40.166 "supported_io_types": { 00:17:40.166 "read": true, 00:17:40.166 "write": true, 00:17:40.166 "unmap": true, 00:17:40.166 "flush": true, 00:17:40.166 "reset": true, 00:17:40.166 "nvme_admin": true, 00:17:40.166 "nvme_io": true, 00:17:40.166 "nvme_io_md": false, 00:17:40.166 "write_zeroes": true, 00:17:40.166 "zcopy": false, 00:17:40.166 "get_zone_info": false, 00:17:40.166 "zone_management": false, 00:17:40.166 "zone_append": false, 00:17:40.166 "compare": true, 00:17:40.166 "compare_and_write": true, 00:17:40.166 "abort": true, 00:17:40.166 "seek_hole": false, 00:17:40.166 "seek_data": false, 00:17:40.166 "copy": true, 00:17:40.166 "nvme_iov_md": false 00:17:40.166 }, 00:17:40.166 "memory_domains": [ 00:17:40.166 { 00:17:40.166 "dma_device_id": "system", 00:17:40.166 "dma_device_type": 1 00:17:40.166 } 00:17:40.166 ], 00:17:40.166 "driver_specific": { 00:17:40.166 "nvme": [ 00:17:40.166 { 00:17:40.166 "trid": { 00:17:40.166 "trtype": "TCP", 00:17:40.166 "adrfam": "IPv4", 00:17:40.166 "traddr": "10.0.0.2", 00:17:40.166 "trsvcid": "4420", 00:17:40.166 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:40.166 }, 00:17:40.166 "ctrlr_data": { 00:17:40.166 "cntlid": 1, 00:17:40.166 "vendor_id": "0x8086", 00:17:40.166 "model_number": "SPDK bdev Controller", 00:17:40.166 "serial_number": "SPDK0", 
00:17:40.166 "firmware_revision": "24.09", 00:17:40.166 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:40.166 "oacs": { 00:17:40.166 "security": 0, 00:17:40.166 "format": 0, 00:17:40.166 "firmware": 0, 00:17:40.166 "ns_manage": 0 00:17:40.166 }, 00:17:40.166 "multi_ctrlr": true, 00:17:40.166 "ana_reporting": false 00:17:40.166 }, 00:17:40.166 "vs": { 00:17:40.166 "nvme_version": "1.3" 00:17:40.166 }, 00:17:40.166 "ns_data": { 00:17:40.166 "id": 1, 00:17:40.166 "can_share": true 00:17:40.166 } 00:17:40.166 } 00:17:40.166 ], 00:17:40.166 "mp_policy": "active_passive" 00:17:40.166 } 00:17:40.166 } 00:17:40.166 ] 00:17:40.166 09:27:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=723106 00:17:40.166 09:27:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:40.166 09:27:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:40.424 Running I/O for 10 seconds... 00:17:41.358 Latency(us) 00:17:41.358 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:41.358 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:41.358 Nvme0n1 : 1.00 13903.00 54.31 0.00 0.00 0.00 0.00 0.00 00:17:41.358 =================================================================================================================== 00:17:41.358 Total : 13903.00 54.31 0.00 0.00 0.00 0.00 0.00 00:17:41.358 00:17:42.290 09:27:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 86b69cac-0a72-4d54-8f36-f9f151b5d0e2 00:17:42.290 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:42.290 Nvme0n1 : 2.00 14087.50 55.03 0.00 0.00 0.00 0.00 0.00 00:17:42.290 =================================================================================================================== 00:17:42.291 Total : 14087.50 55.03 0.00 0.00 0.00 0.00 0.00 00:17:42.291 00:17:42.548 true 00:17:42.548 09:27:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86b69cac-0a72-4d54-8f36-f9f151b5d0e2 00:17:42.548 09:27:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:42.805 09:27:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:42.805 09:27:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:42.805 09:27:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 723106 00:17:43.399 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:43.399 Nvme0n1 : 3.00 14127.67 55.19 0.00 0.00 0.00 0.00 0.00 00:17:43.399 =================================================================================================================== 00:17:43.399 Total : 14127.67 55.19 0.00 0.00 0.00 0.00 0.00 00:17:43.399 00:17:44.332 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:44.332 Nvme0n1 : 4.00 14196.00 55.45 0.00 0.00 0.00 0.00 0.00 00:17:44.332 =================================================================================================================== 00:17:44.332 Total : 14196.00 55.45 0.00 0.00 
0.00 0.00 0.00 00:17:44.332 00:17:45.708 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:45.708 Nvme0n1 : 5.00 14249.60 55.66 0.00 0.00 0.00 0.00 0.00 00:17:45.708 =================================================================================================================== 00:17:45.708 Total : 14249.60 55.66 0.00 0.00 0.00 0.00 0.00 00:17:45.708 00:17:46.276 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:46.276 Nvme0n1 : 6.00 14306.67 55.89 0.00 0.00 0.00 0.00 0.00 00:17:46.276 =================================================================================================================== 00:17:46.276 Total : 14306.67 55.89 0.00 0.00 0.00 0.00 0.00 00:17:46.276 00:17:47.652 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:47.652 Nvme0n1 : 7.00 14347.29 56.04 0.00 0.00 0.00 0.00 0.00 00:17:47.652 =================================================================================================================== 00:17:47.652 Total : 14347.29 56.04 0.00 0.00 0.00 0.00 0.00 00:17:47.652 00:17:48.588 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:48.588 Nvme0n1 : 8.00 14379.88 56.17 0.00 0.00 0.00 0.00 0.00 00:17:48.588 =================================================================================================================== 00:17:48.588 Total : 14379.88 56.17 0.00 0.00 0.00 0.00 0.00 00:17:48.588 00:17:49.523 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:49.523 Nvme0n1 : 9.00 14465.44 56.51 0.00 0.00 0.00 0.00 0.00 00:17:49.523 =================================================================================================================== 00:17:49.523 Total : 14465.44 56.51 0.00 0.00 0.00 0.00 0.00 00:17:49.523 00:17:50.459 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:50.459 Nvme0n1 : 10.00 14516.40 56.70 0.00 0.00 0.00 0.00 0.00 00:17:50.459 =================================================================================================================== 00:17:50.459 Total : 14516.40 56.70 0.00 0.00 0.00 0.00 0.00 00:17:50.459 00:17:50.459 00:17:50.459 Latency(us) 00:17:50.459 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:50.459 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:50.459 Nvme0n1 : 10.01 14517.97 56.71 0.00 0.00 8810.22 2560.76 13495.56 00:17:50.459 =================================================================================================================== 00:17:50.459 Total : 14517.97 56.71 0.00 0.00 8810.22 2560.76 13495.56 00:17:50.459 0 00:17:50.459 09:27:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 722974 00:17:50.459 09:27:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 722974 ']' 00:17:50.459 09:27:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 722974 00:17:50.459 09:27:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:17:50.459 09:27:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:50.459 09:27:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 722974 00:17:50.459 09:27:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:50.459 09:27:34 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:50.459 09:27:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 722974' 00:17:50.459 killing process with pid 722974 00:17:50.459 09:27:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 722974 00:17:50.459 Received shutdown signal, test time was about 10.000000 seconds 00:17:50.459 00:17:50.459 Latency(us) 00:17:50.459 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:50.459 =================================================================================================================== 00:17:50.459 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:50.459 09:27:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 722974 00:17:50.717 09:27:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:50.975 09:27:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:51.233 09:27:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86b69cac-0a72-4d54-8f36-f9f151b5d0e2 00:17:51.233 09:27:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:51.491 09:27:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:51.491 09:27:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:17:51.491 09:27:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 719975 00:17:51.491 09:27:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 719975 00:17:51.491 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 719975 Killed "${NVMF_APP[@]}" "$@" 00:17:51.491 09:27:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:17:51.491 09:27:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:17:51.491 09:27:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:51.491 09:27:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:51.491 09:27:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:51.491 09:27:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=724429 00:17:51.491 09:27:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:51.491 09:27:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 724429 00:17:51.491 09:27:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 724429 ']' 00:17:51.491 09:27:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:51.491 09:27:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:17:51.491 09:27:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:51.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:51.491 09:27:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:51.491 09:27:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:51.491 [2024-07-14 09:27:35.926360] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:17:51.491 [2024-07-14 09:27:35.926441] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:51.749 EAL: No free 2048 kB hugepages reported on node 1 00:17:51.749 [2024-07-14 09:27:35.993825] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.749 [2024-07-14 09:27:36.079171] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:51.749 [2024-07-14 09:27:36.079250] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:51.749 [2024-07-14 09:27:36.079264] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:51.749 [2024-07-14 09:27:36.079275] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:51.749 [2024-07-14 09:27:36.079284] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:51.749 [2024-07-14 09:27:36.079321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.749 09:27:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:51.749 09:27:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:17:51.749 09:27:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:51.749 09:27:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:51.749 09:27:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:52.007 09:27:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:52.007 09:27:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:52.264 [2024-07-14 09:27:36.495837] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:17:52.264 [2024-07-14 09:27:36.495995] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:17:52.264 [2024-07-14 09:27:36.496045] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:17:52.264 09:27:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:17:52.264 09:27:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 0cb75445-58f9-4e00-9e17-3d5f0d0ad022 00:17:52.265 09:27:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=0cb75445-58f9-4e00-9e17-3d5f0d0ad022 00:17:52.265 09:27:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:52.265 09:27:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:17:52.265 09:27:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:52.265 09:27:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:52.265 09:27:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:52.522 09:27:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0cb75445-58f9-4e00-9e17-3d5f0d0ad022 -t 2000 00:17:52.780 [ 00:17:52.780 { 00:17:52.780 "name": "0cb75445-58f9-4e00-9e17-3d5f0d0ad022", 00:17:52.780 "aliases": [ 00:17:52.780 "lvs/lvol" 00:17:52.780 ], 00:17:52.780 "product_name": "Logical Volume", 00:17:52.780 "block_size": 4096, 00:17:52.780 "num_blocks": 38912, 00:17:52.780 "uuid": "0cb75445-58f9-4e00-9e17-3d5f0d0ad022", 00:17:52.780 "assigned_rate_limits": { 00:17:52.780 "rw_ios_per_sec": 0, 00:17:52.780 "rw_mbytes_per_sec": 0, 00:17:52.780 "r_mbytes_per_sec": 0, 00:17:52.780 "w_mbytes_per_sec": 0 00:17:52.780 }, 00:17:52.780 "claimed": false, 00:17:52.780 "zoned": false, 00:17:52.780 "supported_io_types": { 00:17:52.780 "read": true, 00:17:52.780 "write": true, 00:17:52.780 "unmap": true, 00:17:52.780 "flush": false, 00:17:52.780 "reset": true, 00:17:52.780 "nvme_admin": false, 00:17:52.780 "nvme_io": false, 00:17:52.780 "nvme_io_md": 
false, 00:17:52.780 "write_zeroes": true, 00:17:52.780 "zcopy": false, 00:17:52.780 "get_zone_info": false, 00:17:52.780 "zone_management": false, 00:17:52.780 "zone_append": false, 00:17:52.780 "compare": false, 00:17:52.780 "compare_and_write": false, 00:17:52.780 "abort": false, 00:17:52.780 "seek_hole": true, 00:17:52.780 "seek_data": true, 00:17:52.780 "copy": false, 00:17:52.780 "nvme_iov_md": false 00:17:52.780 }, 00:17:52.780 "driver_specific": { 00:17:52.780 "lvol": { 00:17:52.780 "lvol_store_uuid": "86b69cac-0a72-4d54-8f36-f9f151b5d0e2", 00:17:52.780 "base_bdev": "aio_bdev", 00:17:52.780 "thin_provision": false, 00:17:52.780 "num_allocated_clusters": 38, 00:17:52.780 "snapshot": false, 00:17:52.780 "clone": false, 00:17:52.780 "esnap_clone": false 00:17:52.780 } 00:17:52.780 } 00:17:52.780 } 00:17:52.780 ] 00:17:52.780 09:27:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:17:52.780 09:27:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86b69cac-0a72-4d54-8f36-f9f151b5d0e2 00:17:52.780 09:27:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:17:53.038 09:27:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:17:53.038 09:27:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86b69cac-0a72-4d54-8f36-f9f151b5d0e2 00:17:53.038 09:27:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:17:53.295 09:27:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:17:53.295 09:27:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:53.553 [2024-07-14 09:27:37.828991] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:53.553 09:27:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86b69cac-0a72-4d54-8f36-f9f151b5d0e2 00:17:53.553 09:27:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:17:53.553 09:27:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86b69cac-0a72-4d54-8f36-f9f151b5d0e2 00:17:53.553 09:27:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:53.553 09:27:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:53.553 09:27:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:53.553 09:27:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:53.553 09:27:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:17:53.553 09:27:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:53.553 09:27:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:53.553 09:27:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:53.553 09:27:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86b69cac-0a72-4d54-8f36-f9f151b5d0e2 00:17:53.810 request: 00:17:53.810 { 00:17:53.810 "uuid": "86b69cac-0a72-4d54-8f36-f9f151b5d0e2", 00:17:53.810 "method": "bdev_lvol_get_lvstores", 00:17:53.810 "req_id": 1 00:17:53.810 } 00:17:53.810 Got JSON-RPC error response 00:17:53.810 response: 00:17:53.810 { 00:17:53.810 "code": -19, 00:17:53.810 "message": "No such device" 00:17:53.810 } 00:17:53.810 09:27:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:17:53.810 09:27:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:53.810 09:27:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:53.810 09:27:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:53.810 09:27:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:54.068 aio_bdev 00:17:54.068 09:27:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 0cb75445-58f9-4e00-9e17-3d5f0d0ad022 00:17:54.068 09:27:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=0cb75445-58f9-4e00-9e17-3d5f0d0ad022 00:17:54.068 09:27:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:54.068 09:27:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:17:54.068 09:27:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:54.068 09:27:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:54.068 09:27:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:54.326 09:27:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0cb75445-58f9-4e00-9e17-3d5f0d0ad022 -t 2000 00:17:54.583 [ 00:17:54.583 { 00:17:54.583 "name": "0cb75445-58f9-4e00-9e17-3d5f0d0ad022", 00:17:54.583 "aliases": [ 00:17:54.583 "lvs/lvol" 00:17:54.583 ], 00:17:54.583 "product_name": "Logical Volume", 00:17:54.583 "block_size": 4096, 00:17:54.583 "num_blocks": 38912, 00:17:54.583 "uuid": "0cb75445-58f9-4e00-9e17-3d5f0d0ad022", 00:17:54.583 "assigned_rate_limits": { 00:17:54.583 "rw_ios_per_sec": 0, 00:17:54.583 "rw_mbytes_per_sec": 0, 00:17:54.583 "r_mbytes_per_sec": 0, 00:17:54.583 "w_mbytes_per_sec": 0 00:17:54.583 }, 00:17:54.583 "claimed": false, 00:17:54.583 "zoned": false, 00:17:54.583 "supported_io_types": { 
00:17:54.583 "read": true, 00:17:54.583 "write": true, 00:17:54.583 "unmap": true, 00:17:54.583 "flush": false, 00:17:54.583 "reset": true, 00:17:54.583 "nvme_admin": false, 00:17:54.583 "nvme_io": false, 00:17:54.583 "nvme_io_md": false, 00:17:54.583 "write_zeroes": true, 00:17:54.583 "zcopy": false, 00:17:54.583 "get_zone_info": false, 00:17:54.583 "zone_management": false, 00:17:54.583 "zone_append": false, 00:17:54.583 "compare": false, 00:17:54.583 "compare_and_write": false, 00:17:54.583 "abort": false, 00:17:54.583 "seek_hole": true, 00:17:54.583 "seek_data": true, 00:17:54.583 "copy": false, 00:17:54.583 "nvme_iov_md": false 00:17:54.583 }, 00:17:54.583 "driver_specific": { 00:17:54.583 "lvol": { 00:17:54.583 "lvol_store_uuid": "86b69cac-0a72-4d54-8f36-f9f151b5d0e2", 00:17:54.583 "base_bdev": "aio_bdev", 00:17:54.583 "thin_provision": false, 00:17:54.583 "num_allocated_clusters": 38, 00:17:54.583 "snapshot": false, 00:17:54.583 "clone": false, 00:17:54.583 "esnap_clone": false 00:17:54.583 } 00:17:54.583 } 00:17:54.583 } 00:17:54.583 ] 00:17:54.583 09:27:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:17:54.583 09:27:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86b69cac-0a72-4d54-8f36-f9f151b5d0e2 00:17:54.583 09:27:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:54.841 09:27:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:54.841 09:27:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86b69cac-0a72-4d54-8f36-f9f151b5d0e2 00:17:54.841 09:27:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:55.097 09:27:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:55.097 09:27:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0cb75445-58f9-4e00-9e17-3d5f0d0ad022 00:17:55.354 09:27:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 86b69cac-0a72-4d54-8f36-f9f151b5d0e2 00:17:55.612 09:27:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:55.870 09:27:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:55.870 00:17:55.870 real 0m19.169s 00:17:55.870 user 0m48.552s 00:17:55.870 sys 0m4.752s 00:17:55.870 09:27:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:55.870 09:27:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:55.870 ************************************ 00:17:55.870 END TEST lvs_grow_dirty 00:17:55.870 ************************************ 00:17:55.870 09:27:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:17:55.870 09:27:40 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
00:17:55.870 09:27:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:17:55.870 09:27:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:17:55.870 09:27:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:17:55.870 09:27:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:55.870 09:27:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:17:55.870 09:27:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:17:55.870 09:27:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:17:55.870 09:27:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:55.870 nvmf_trace.0 00:17:55.870 09:27:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:17:55.870 09:27:40 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:17:55.870 09:27:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:55.870 09:27:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:17:55.870 09:27:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:55.870 09:27:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:17:55.870 09:27:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:55.870 09:27:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:55.870 rmmod nvme_tcp 00:17:55.870 rmmod nvme_fabrics 00:17:55.870 rmmod nvme_keyring 00:17:56.128 09:27:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:56.128 09:27:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:17:56.128 09:27:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:17:56.128 09:27:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 724429 ']' 00:17:56.128 09:27:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 724429 00:17:56.128 09:27:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 724429 ']' 00:17:56.128 09:27:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 724429 00:17:56.128 09:27:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:17:56.128 09:27:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:56.128 09:27:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 724429 00:17:56.128 09:27:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:56.128 09:27:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:56.128 09:27:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 724429' 00:17:56.128 killing process with pid 724429 00:17:56.128 09:27:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 724429 00:17:56.128 09:27:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 724429 00:17:56.387 09:27:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:56.387 09:27:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:56.387 09:27:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:56.387 09:27:40 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:56.387 09:27:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:56.387 09:27:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:56.387 09:27:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:56.387 09:27:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:58.317 09:27:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:58.317 00:17:58.317 real 0m42.133s 00:17:58.317 user 1m11.452s 00:17:58.317 sys 0m8.511s 00:17:58.317 09:27:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:58.317 09:27:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:58.317 ************************************ 00:17:58.317 END TEST nvmf_lvs_grow 00:17:58.317 ************************************ 00:17:58.317 09:27:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:58.317 09:27:42 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:58.317 09:27:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:58.317 09:27:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:58.317 09:27:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:58.317 ************************************ 00:17:58.317 START TEST nvmf_bdev_io_wait 00:17:58.317 ************************************ 00:17:58.317 09:27:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:58.317 * Looking for test storage... 
00:17:58.318 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:58.318 09:27:42 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:58.318 09:27:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:17:58.318 09:27:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:58.318 09:27:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:58.318 09:27:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:58.318 09:27:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:58.318 09:27:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:58.318 09:27:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:58.318 09:27:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:58.318 09:27:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:58.318 09:27:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:58.318 09:27:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:58.318 09:27:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:58.318 09:27:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:58.318 09:27:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:58.318 09:27:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:58.318 09:27:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:58.318 09:27:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:58.318 09:27:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:58.318 09:27:42 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:58.318 09:27:42 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:58.318 09:27:42 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:58.318 09:27:42 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.318 09:27:42 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.318 09:27:42 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.318 09:27:42 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:17:58.318 09:27:42 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.318 09:27:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:17:58.318 09:27:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:58.318 09:27:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:58.318 09:27:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:58.318 09:27:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:58.318 09:27:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:58.318 09:27:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:58.318 09:27:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:58.318 09:27:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:58.318 09:27:42 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:58.318 09:27:42 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:58.318 09:27:42 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:17:58.318 09:27:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:58.318 09:27:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:58.318 09:27:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:58.318 09:27:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:58.318 09:27:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:58.318 09:27:42 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:58.318 09:27:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:58.318 09:27:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:58.318 09:27:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:58.318 09:27:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:58.318 09:27:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:17:58.318 09:27:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:00.866 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:00.866 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:00.866 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:00.866 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:00.866 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:00.866 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:00.866 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:18:00.866 00:18:00.866 --- 10.0.0.2 ping statistics --- 00:18:00.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.867 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:18:00.867 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:00.867 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:00.867 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:18:00.867 00:18:00.867 --- 10.0.0.1 ping statistics --- 00:18:00.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.867 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:18:00.867 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:00.867 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:18:00.867 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:00.867 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:00.867 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:00.867 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:00.867 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:00.867 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:00.867 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:00.867 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:00.867 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:00.867 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:00.867 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:00.867 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=726951 00:18:00.867 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:00.867 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 726951 00:18:00.867 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 726951 ']' 00:18:00.867 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.867 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:00.867 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:00.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:00.867 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:00.867 09:27:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:00.867 [2024-07-14 09:27:44.974974] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
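The device scan above matched the two Intel E810 ports (0x8086 - 0x159b) at 0000:0a:00.0 and 0000:0a:00.1 and picked up their net devices, cvl_0_0 and cvl_0_1, under /sys/bus/pci/devices/<bdf>/net/. nvmf_tcp_init then splits them into a target side and an initiator side. Condensed, the sequence traced above amounts to the sketch below; the interface names and the 10.0.0.0/24 addresses are specific to this run.

  ip netns add cvl_0_0_ns_spdk                                   # target gets its own network namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # first port becomes the target interface
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # second port stays in the root namespace as the initiator
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # keep the host firewall from dropping NVMe/TCP (port 4420)
  ping -c 1 10.0.0.2                                             # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator

The two pings are the sanity check before the harness moves on to starting the target.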
00:18:00.867 [2024-07-14 09:27:44.975052] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:00.867 EAL: No free 2048 kB hugepages reported on node 1 00:18:00.867 [2024-07-14 09:27:45.038328] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:00.867 [2024-07-14 09:27:45.128630] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:00.867 [2024-07-14 09:27:45.128685] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:00.867 [2024-07-14 09:27:45.128699] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:00.867 [2024-07-14 09:27:45.128710] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:00.867 [2024-07-14 09:27:45.128720] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:00.867 [2024-07-14 09:27:45.128769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:00.867 [2024-07-14 09:27:45.128830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:00.867 [2024-07-14 09:27:45.128896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:00.867 [2024-07-14 09:27:45.128899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.867 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:00.867 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:18:00.867 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:00.867 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:00.867 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:00.867 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:00.867 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:18:00.867 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.867 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:00.867 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.867 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:18:00.867 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.867 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:00.867 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.867 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:00.867 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.867 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:00.867 [2024-07-14 09:27:45.287768] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:00.867 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
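With networking in place, nvmfappstart launches the target inside the namespace and the harness drives it over JSON-RPC (rpc_cmd in the trace forwards to SPDK's scripts/rpc.py). Run by hand, the bring-up traced here and continued just below would look roughly like the following sketch, where $SPDK_DIR stands for the checkout path shown in the log:

  ip netns exec cvl_0_0_ns_spdk $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  $SPDK_DIR/scripts/rpc.py bdev_set_options -p 5 -c 1       # tiny bdev_io pool/cache so I/O is forced onto the io_wait path
  $SPDK_DIR/scripts/rpc.py framework_start_init             # leave --wait-for-rpc mode
  $SPDK_DIR/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  $SPDK_DIR/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  $SPDK_DIR/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The -m 0xF mask is why four reactors (cores 0-3) come up above, and bdev_set_options has to land before framework_start_init, which is the whole point of starting the target with --wait-for-rpc.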
00:18:00.867 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:00.867 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.867 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:01.126 Malloc0 00:18:01.126 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.126 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:01.126 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.126 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:01.126 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.126 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:01.126 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.126 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:01.126 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.126 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:01.126 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.126 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:01.126 [2024-07-14 09:27:45.354548] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:01.126 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.126 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=726981 00:18:01.126 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=726982 00:18:01.126 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:18:01.126 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:18:01.126 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:01.126 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:01.126 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=726985 00:18:01.126 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:18:01.126 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:18:01.126 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:01.126 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:01.126 { 00:18:01.126 "params": { 00:18:01.126 "name": "Nvme$subsystem", 00:18:01.126 "trtype": "$TEST_TRANSPORT", 00:18:01.126 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:01.126 "adrfam": "ipv4", 00:18:01.126 "trsvcid": "$NVMF_PORT", 00:18:01.126 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:18:01.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:01.126 "hdgst": ${hdgst:-false}, 00:18:01.126 "ddgst": ${ddgst:-false} 00:18:01.126 }, 00:18:01.126 "method": "bdev_nvme_attach_controller" 00:18:01.126 } 00:18:01.126 EOF 00:18:01.126 )") 00:18:01.126 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:01.126 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:01.126 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:01.126 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:01.126 { 00:18:01.126 "params": { 00:18:01.126 "name": "Nvme$subsystem", 00:18:01.126 "trtype": "$TEST_TRANSPORT", 00:18:01.126 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:01.126 "adrfam": "ipv4", 00:18:01.126 "trsvcid": "$NVMF_PORT", 00:18:01.126 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:01.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:01.126 "hdgst": ${hdgst:-false}, 00:18:01.126 "ddgst": ${ddgst:-false} 00:18:01.126 }, 00:18:01.126 "method": "bdev_nvme_attach_controller" 00:18:01.126 } 00:18:01.126 EOF 00:18:01.126 )") 00:18:01.126 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=726987 00:18:01.126 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:18:01.126 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:18:01.126 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:18:01.126 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:01.126 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:01.126 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:18:01.126 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:18:01.126 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:01.126 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:01.126 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:01.126 { 00:18:01.126 "params": { 00:18:01.126 "name": "Nvme$subsystem", 00:18:01.126 "trtype": "$TEST_TRANSPORT", 00:18:01.126 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:01.126 "adrfam": "ipv4", 00:18:01.126 "trsvcid": "$NVMF_PORT", 00:18:01.126 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:01.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:01.126 "hdgst": ${hdgst:-false}, 00:18:01.126 "ddgst": ${ddgst:-false} 00:18:01.126 }, 00:18:01.126 "method": "bdev_nvme_attach_controller" 00:18:01.126 } 00:18:01.126 EOF 00:18:01.126 )") 00:18:01.126 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:01.126 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:01.126 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:01.126 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:01.126 { 00:18:01.126 "params": { 00:18:01.126 
"name": "Nvme$subsystem", 00:18:01.126 "trtype": "$TEST_TRANSPORT", 00:18:01.126 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:01.126 "adrfam": "ipv4", 00:18:01.126 "trsvcid": "$NVMF_PORT", 00:18:01.126 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:01.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:01.126 "hdgst": ${hdgst:-false}, 00:18:01.126 "ddgst": ${ddgst:-false} 00:18:01.126 }, 00:18:01.126 "method": "bdev_nvme_attach_controller" 00:18:01.126 } 00:18:01.126 EOF 00:18:01.126 )") 00:18:01.126 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:01.126 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:01.126 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 726981 00:18:01.126 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:01.126 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:01.126 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:01.126 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:01.126 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:01.126 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:01.126 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:01.126 "params": { 00:18:01.126 "name": "Nvme1", 00:18:01.126 "trtype": "tcp", 00:18:01.127 "traddr": "10.0.0.2", 00:18:01.127 "adrfam": "ipv4", 00:18:01.127 "trsvcid": "4420", 00:18:01.127 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:01.127 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:01.127 "hdgst": false, 00:18:01.127 "ddgst": false 00:18:01.127 }, 00:18:01.127 "method": "bdev_nvme_attach_controller" 00:18:01.127 }' 00:18:01.127 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:01.127 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:01.127 "params": { 00:18:01.127 "name": "Nvme1", 00:18:01.127 "trtype": "tcp", 00:18:01.127 "traddr": "10.0.0.2", 00:18:01.127 "adrfam": "ipv4", 00:18:01.127 "trsvcid": "4420", 00:18:01.127 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:01.127 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:01.127 "hdgst": false, 00:18:01.127 "ddgst": false 00:18:01.127 }, 00:18:01.127 "method": "bdev_nvme_attach_controller" 00:18:01.127 }' 00:18:01.127 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:01.127 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:01.127 "params": { 00:18:01.127 "name": "Nvme1", 00:18:01.127 "trtype": "tcp", 00:18:01.127 "traddr": "10.0.0.2", 00:18:01.127 "adrfam": "ipv4", 00:18:01.127 "trsvcid": "4420", 00:18:01.127 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:01.127 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:01.127 "hdgst": false, 00:18:01.127 "ddgst": false 00:18:01.127 }, 00:18:01.127 "method": "bdev_nvme_attach_controller" 00:18:01.127 }' 00:18:01.127 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:01.127 09:27:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:01.127 "params": { 00:18:01.127 "name": "Nvme1", 00:18:01.127 "trtype": "tcp", 00:18:01.127 "traddr": "10.0.0.2", 00:18:01.127 "adrfam": "ipv4", 00:18:01.127 "trsvcid": "4420", 00:18:01.127 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:01.127 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:01.127 "hdgst": false, 00:18:01.127 "ddgst": false 00:18:01.127 }, 00:18:01.127 "method": 
"bdev_nvme_attach_controller" 00:18:01.127 }' 00:18:01.127 [2024-07-14 09:27:45.401274] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:18:01.127 [2024-07-14 09:27:45.401350] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:18:01.127 [2024-07-14 09:27:45.402386] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:18:01.127 [2024-07-14 09:27:45.402390] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:18:01.127 [2024-07-14 09:27:45.402467] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-14 09:27:45.402467] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:18:01.127 --proc-type=auto ] 00:18:01.127 [2024-07-14 09:27:45.402608] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:18:01.127 [2024-07-14 09:27:45.402687] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:18:01.127 EAL: No free 2048 kB hugepages reported on node 1 00:18:01.127 EAL: No free 2048 kB hugepages reported on node 1 00:18:01.127 [2024-07-14 09:27:45.574903] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.385 EAL: No free 2048 kB hugepages reported on node 1 00:18:01.385 [2024-07-14 09:27:45.651790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:18:01.385 [2024-07-14 09:27:45.678370] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.385 EAL: No free 2048 kB hugepages reported on node 1 00:18:01.385 [2024-07-14 09:27:45.756119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:18:01.385 [2024-07-14 09:27:45.782372] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.643 [2024-07-14 09:27:45.857574] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.643 [2024-07-14 09:27:45.862093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:18:01.643 [2024-07-14 09:27:45.929094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:18:01.643 Running I/O for 1 seconds... 00:18:01.901 Running I/O for 1 seconds... 00:18:01.901 Running I/O for 1 seconds... 00:18:01.901 Running I/O for 1 seconds... 
00:18:02.859 00:18:02.859 Latency(us) 00:18:02.859 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:02.859 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:18:02.859 Nvme1n1 : 1.00 188785.77 737.44 0.00 0.00 675.26 294.31 976.97 00:18:02.859 =================================================================================================================== 00:18:02.859 Total : 188785.77 737.44 0.00 0.00 675.26 294.31 976.97 00:18:02.859 00:18:02.859 Latency(us) 00:18:02.859 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:02.859 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:18:02.859 Nvme1n1 : 1.02 7475.78 29.20 0.00 0.00 16970.44 7136.14 26020.22 00:18:02.859 =================================================================================================================== 00:18:02.859 Total : 7475.78 29.20 0.00 0.00 16970.44 7136.14 26020.22 00:18:02.859 00:18:02.859 Latency(us) 00:18:02.859 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:02.859 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:18:02.859 Nvme1n1 : 1.02 7553.66 29.51 0.00 0.00 16800.75 8107.05 25826.04 00:18:02.859 =================================================================================================================== 00:18:02.859 Total : 7553.66 29.51 0.00 0.00 16800.75 8107.05 25826.04 00:18:02.859 00:18:02.859 Latency(us) 00:18:02.859 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:02.859 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:18:02.859 Nvme1n1 : 1.01 7094.81 27.71 0.00 0.00 17977.77 6747.78 39418.69 00:18:02.859 =================================================================================================================== 00:18:02.859 Total : 7094.81 27.71 0.00 0.00 17977.77 6747.78 39418.69 00:18:03.118 09:27:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 726982 00:18:03.118 09:27:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 726985 00:18:03.118 09:27:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 726987 00:18:03.118 09:27:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:03.118 09:27:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.118 09:27:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:03.118 09:27:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.118 09:27:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:18:03.118 09:27:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:18:03.118 09:27:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:03.118 09:27:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:18:03.118 09:27:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:03.118 09:27:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:18:03.118 09:27:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:03.118 09:27:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:03.118 rmmod nvme_tcp 00:18:03.376 rmmod nvme_fabrics 00:18:03.376 rmmod nvme_keyring 00:18:03.376 09:27:47 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:03.376 09:27:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:18:03.376 09:27:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:18:03.376 09:27:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 726951 ']' 00:18:03.376 09:27:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 726951 00:18:03.377 09:27:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 726951 ']' 00:18:03.377 09:27:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 726951 00:18:03.377 09:27:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:18:03.377 09:27:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:03.377 09:27:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 726951 00:18:03.377 09:27:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:03.377 09:27:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:03.377 09:27:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 726951' 00:18:03.377 killing process with pid 726951 00:18:03.377 09:27:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 726951 00:18:03.377 09:27:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 726951 00:18:03.636 09:27:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:03.636 09:27:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:03.636 09:27:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:03.636 09:27:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:03.636 09:27:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:03.636 09:27:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:03.636 09:27:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:03.636 09:27:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:05.541 09:27:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:05.541 00:18:05.541 real 0m7.189s 00:18:05.541 user 0m16.728s 00:18:05.541 sys 0m3.466s 00:18:05.541 09:27:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:05.541 09:27:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:05.541 ************************************ 00:18:05.541 END TEST nvmf_bdev_io_wait 00:18:05.541 ************************************ 00:18:05.541 09:27:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:05.541 09:27:49 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:05.541 09:27:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:05.541 09:27:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:05.541 09:27:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:05.541 ************************************ 00:18:05.541 START TEST nvmf_queue_depth 00:18:05.541 ************************************ 00:18:05.541 
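Before nvmf_queue_depth starts, the bdev_io_wait run tears itself down (the nvmftestfini trace above): the host-side NVMe modules are unloaded, the target process is killed, and the namespace plumbing is removed so the next test starts clean. Roughly:

  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill $nvmfpid                      # 726951 in this run
  ip netns delete cvl_0_0_ns_spdk    # assumption: this is what _remove_spdk_ns amounts to here
  ip -4 addr flush cvl_0_1

The same prepare/run/teardown cycle then repeats below for the queue depth test.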
09:27:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:05.541 * Looking for test storage... 00:18:05.541 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:05.541 09:27:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:05.541 09:27:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:18:05.541 09:27:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:05.541 09:27:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:05.541 09:27:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:05.541 09:27:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:05.541 09:27:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:05.541 09:27:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:05.541 09:27:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:05.541 09:27:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:05.541 09:27:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:05.541 09:27:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:05.541 09:27:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:05.541 09:27:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:05.541 09:27:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:05.541 09:27:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:05.541 09:27:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:05.541 09:27:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:05.541 09:27:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:05.799 09:27:49 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:05.799 09:27:49 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:05.799 09:27:49 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:05.800 09:27:49 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.800 09:27:49 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.800 09:27:49 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.800 09:27:49 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:18:05.800 09:27:49 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.800 09:27:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:18:05.800 09:27:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:05.800 09:27:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:05.800 09:27:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:05.800 09:27:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:05.800 09:27:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:05.800 09:27:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:05.800 09:27:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:05.800 09:27:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:05.800 09:27:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:18:05.800 09:27:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:18:05.800 09:27:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:05.800 09:27:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:18:05.800 09:27:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:05.800 09:27:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:05.800 09:27:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:05.800 09:27:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:05.800 09:27:49 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:18:05.800 09:27:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:05.800 09:27:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:05.800 09:27:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:05.800 09:27:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:05.800 09:27:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:05.800 09:27:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:18:05.800 09:27:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:07.702 09:27:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:07.702 09:27:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:18:07.702 09:27:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:07.702 09:27:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:07.702 09:27:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:07.702 09:27:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:07.702 09:27:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:07.702 09:27:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:18:07.702 09:27:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:07.702 09:27:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:18:07.702 09:27:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:18:07.702 09:27:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:18:07.702 09:27:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:18:07.702 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:18:07.702 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:18:07.702 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:07.702 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:07.702 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:07.702 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:07.702 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:07.702 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:07.702 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:07.702 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:07.702 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:07.702 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:07.702 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:07.702 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:07.702 
09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:07.702 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:07.702 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:07.702 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:07.702 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:07.702 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:07.702 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:07.702 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:07.702 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:07.702 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:07.702 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:07.702 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:07.702 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:07.702 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:07.702 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:07.702 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:07.702 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:07.702 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:07.702 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:07.702 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:07.702 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:07.702 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:07.702 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:07.702 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:07.702 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:07.702 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:07.702 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:07.702 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:07.702 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:07.702 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:07.702 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:07.702 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:07.702 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:07.702 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:07.702 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:07.702 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:07.702 09:27:52 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:07.702 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:07.702 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:07.702 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:07.702 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:07.702 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:07.702 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:07.702 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:07.702 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:07.702 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:18:07.702 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:07.702 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:07.702 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:07.702 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:07.703 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:07.703 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:07.703 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:07.703 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:07.703 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:07.703 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:07.703 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:07.703 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:07.703 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:07.703 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:07.703 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:07.703 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:07.703 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:07.703 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:07.703 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:07.703 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:07.703 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:07.703 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:07.703 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:07.703 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:07.703 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:18:07.703 00:18:07.703 --- 10.0.0.2 ping statistics --- 00:18:07.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:07.703 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:18:07.703 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:07.703 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:07.703 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:18:07.703 00:18:07.703 --- 10.0.0.1 ping statistics --- 00:18:07.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:07.703 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:18:07.703 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:07.703 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:18:07.703 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:07.703 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:07.703 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:07.703 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:07.703 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:07.703 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:07.703 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:07.960 09:27:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:18:07.960 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:07.960 09:27:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:07.960 09:27:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:07.960 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=729206 00:18:07.960 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:07.960 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 729206 00:18:07.960 09:27:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 729206 ']' 00:18:07.960 09:27:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:07.960 09:27:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:07.960 09:27:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:07.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:07.960 09:27:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:07.960 09:27:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:07.960 [2024-07-14 09:27:52.227309] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:18:07.960 [2024-07-14 09:27:52.227384] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:07.960 EAL: No free 2048 kB hugepages reported on node 1 00:18:07.960 [2024-07-14 09:27:52.290208] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:07.960 [2024-07-14 09:27:52.373265] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:07.960 [2024-07-14 09:27:52.373320] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:07.960 [2024-07-14 09:27:52.373344] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:07.960 [2024-07-14 09:27:52.373356] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:07.960 [2024-07-14 09:27:52.373367] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:07.960 [2024-07-14 09:27:52.373424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:08.218 09:27:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:08.218 09:27:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:18:08.218 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:08.218 09:27:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:08.218 09:27:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:08.218 09:27:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:08.218 09:27:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:08.218 09:27:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.218 09:27:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:08.218 [2024-07-14 09:27:52.505714] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:08.218 09:27:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.218 09:27:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:08.218 09:27:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.218 09:27:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:08.218 Malloc0 00:18:08.218 09:27:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.218 09:27:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:08.218 09:27:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.218 09:27:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:08.218 09:27:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.218 09:27:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:08.218 09:27:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.218 
09:27:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:08.218 09:27:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.218 09:27:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:08.218 09:27:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.218 09:27:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:08.218 [2024-07-14 09:27:52.566933] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:08.218 09:27:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.218 09:27:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=729342 00:18:08.218 09:27:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:18:08.218 09:27:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:08.218 09:27:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 729342 /var/tmp/bdevperf.sock 00:18:08.218 09:27:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 729342 ']' 00:18:08.218 09:27:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:08.218 09:27:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:08.218 09:27:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:08.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:08.218 09:27:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:08.218 09:27:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:08.218 [2024-07-14 09:27:52.612002] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:18:08.218 [2024-07-14 09:27:52.612066] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid729342 ] 00:18:08.218 EAL: No free 2048 kB hugepages reported on node 1 00:18:08.476 [2024-07-14 09:27:52.673271] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.476 [2024-07-14 09:27:52.764980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:08.476 09:27:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:08.476 09:27:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:18:08.476 09:27:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:08.476 09:27:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.476 09:27:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:08.734 NVMe0n1 00:18:08.734 09:27:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.734 09:27:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:08.992 Running I/O for 10 seconds... 00:18:18.976 00:18:18.976 Latency(us) 00:18:18.976 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:18.976 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:18:18.976 Verification LBA range: start 0x0 length 0x4000 00:18:18.976 NVMe0n1 : 10.08 8465.11 33.07 0.00 0.00 120353.41 16311.18 77283.93 00:18:18.976 =================================================================================================================== 00:18:18.976 Total : 8465.11 33.07 0.00 0.00 120353.41 16311.18 77283.93 00:18:18.976 0 00:18:18.976 09:28:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 729342 00:18:18.976 09:28:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 729342 ']' 00:18:18.976 09:28:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 729342 00:18:18.976 09:28:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:18:18.976 09:28:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:18.976 09:28:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 729342 00:18:18.976 09:28:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:18.976 09:28:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:18.976 09:28:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 729342' 00:18:18.976 killing process with pid 729342 00:18:18.976 09:28:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 729342 00:18:18.976 Received shutdown signal, test time was about 10.000000 seconds 00:18:18.976 00:18:18.976 Latency(us) 00:18:18.976 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:18.976 =================================================================================================================== 
00:18:18.976 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:18.976 09:28:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 729342 00:18:19.237 09:28:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:19.237 09:28:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:18:19.237 09:28:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:19.237 09:28:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:18:19.237 09:28:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:19.237 09:28:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:18:19.237 09:28:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:19.237 09:28:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:19.237 rmmod nvme_tcp 00:18:19.237 rmmod nvme_fabrics 00:18:19.237 rmmod nvme_keyring 00:18:19.237 09:28:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:19.237 09:28:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:18:19.237 09:28:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:18:19.237 09:28:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 729206 ']' 00:18:19.237 09:28:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 729206 00:18:19.237 09:28:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 729206 ']' 00:18:19.237 09:28:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 729206 00:18:19.237 09:28:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:18:19.237 09:28:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:19.237 09:28:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 729206 00:18:19.237 09:28:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:19.237 09:28:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:19.237 09:28:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 729206' 00:18:19.237 killing process with pid 729206 00:18:19.237 09:28:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 729206 00:18:19.237 09:28:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 729206 00:18:19.495 09:28:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:19.495 09:28:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:19.495 09:28:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:19.495 09:28:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:19.495 09:28:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:19.495 09:28:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:19.495 09:28:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:19.495 09:28:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:22.071 09:28:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:22.071 00:18:22.071 real 0m16.031s 00:18:22.071 user 0m22.509s 
00:18:22.071 sys 0m3.119s 00:18:22.071 09:28:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:22.071 09:28:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:22.071 ************************************ 00:18:22.071 END TEST nvmf_queue_depth 00:18:22.071 ************************************ 00:18:22.071 09:28:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:22.071 09:28:05 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:22.071 09:28:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:22.071 09:28:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:22.071 09:28:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:22.071 ************************************ 00:18:22.071 START TEST nvmf_target_multipath 00:18:22.071 ************************************ 00:18:22.071 09:28:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:22.071 * Looking for test storage... 00:18:22.071 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:22.071 09:28:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:22.071 09:28:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:18:22.071 09:28:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:22.071 09:28:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:22.071 09:28:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:22.071 09:28:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:22.071 09:28:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:22.071 09:28:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:22.071 09:28:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:22.071 09:28:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:22.071 09:28:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:22.071 09:28:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:22.071 09:28:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:22.071 09:28:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:22.071 09:28:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:22.071 09:28:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:22.071 09:28:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:22.071 09:28:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:22.071 09:28:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:22.071 09:28:06 
nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:22.071 09:28:06 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:22.071 09:28:06 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:22.071 09:28:06 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.071 09:28:06 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.071 09:28:06 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.071 09:28:06 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:18:22.071 09:28:06 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.071 09:28:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:18:22.071 09:28:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:22.071 09:28:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:22.071 09:28:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:22.071 09:28:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:22.071 09:28:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:22.071 09:28:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:18:22.071 09:28:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:22.071 09:28:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:22.071 09:28:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:22.071 09:28:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:22.071 09:28:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:22.071 09:28:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:22.071 09:28:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:18:22.071 09:28:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:22.071 09:28:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:22.071 09:28:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:22.071 09:28:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:22.071 09:28:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:22.071 09:28:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:22.071 09:28:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:22.071 09:28:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:22.071 09:28:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:22.071 09:28:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:22.071 09:28:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:18:22.071 09:28:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:23.975 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:23.975 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:23.975 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:23.975 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:23.975 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:23.976 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:23.976 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:23.976 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:23.976 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:23.976 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:23.976 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:23.976 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:23.976 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:23.976 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:23.976 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:23.976 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:23.976 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:18:23.976 00:18:23.976 --- 10.0.0.2 ping statistics --- 00:18:23.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:23.976 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:18:23.976 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:23.976 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:23.976 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:18:23.976 00:18:23.976 --- 10.0.0.1 ping statistics --- 00:18:23.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:23.976 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:18:23.976 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:23.976 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:18:23.976 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:23.976 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:23.976 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:23.976 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:23.976 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:23.976 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:23.976 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:23.976 09:28:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:18:23.976 09:28:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:18:23.976 only one NIC for nvmf test 00:18:23.976 09:28:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:18:23.976 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:23.976 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:18:23.976 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:23.976 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:18:23.976 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:23.976 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:23.976 rmmod nvme_tcp 00:18:23.976 rmmod nvme_fabrics 00:18:23.976 rmmod nvme_keyring 00:18:23.976 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:23.976 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:18:23.976 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:18:23.976 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:23.976 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:23.976 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:23.976 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:23.976 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:23.976 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:23.976 09:28:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:23.976 09:28:08 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:23.976 09:28:08 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:26.511 09:28:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:18:26.511 09:28:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:18:26.511 09:28:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:18:26.511 09:28:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:26.511 09:28:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:18:26.511 09:28:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:26.511 09:28:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:18:26.511 09:28:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:26.511 09:28:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:26.511 09:28:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:26.511 09:28:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:18:26.511 09:28:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:18:26.511 09:28:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:26.511 09:28:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:26.511 09:28:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:26.511 09:28:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:26.511 09:28:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:26.511 09:28:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:26.511 09:28:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:26.511 09:28:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:26.511 09:28:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:26.511 09:28:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:26.511 00:18:26.511 real 0m4.346s 00:18:26.511 user 0m0.823s 00:18:26.511 sys 0m1.516s 00:18:26.511 09:28:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:26.511 09:28:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:26.511 ************************************ 00:18:26.511 END TEST nvmf_target_multipath 00:18:26.511 ************************************ 00:18:26.511 09:28:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:26.511 09:28:10 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:26.511 09:28:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:26.511 09:28:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:26.511 09:28:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:26.511 ************************************ 00:18:26.511 START TEST nvmf_zcopy 00:18:26.511 ************************************ 00:18:26.511 09:28:10 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:26.511 * Looking for test storage... 
00:18:26.511 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:26.511 09:28:10 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:26.511 09:28:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:18:26.511 09:28:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:26.511 09:28:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:26.511 09:28:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:26.511 09:28:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:26.511 09:28:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:26.511 09:28:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:26.511 09:28:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:26.511 09:28:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:26.511 09:28:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:26.511 09:28:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:26.511 09:28:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:26.511 09:28:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:26.511 09:28:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:26.511 09:28:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:26.511 09:28:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:26.511 09:28:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:26.511 09:28:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:26.511 09:28:10 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:26.511 09:28:10 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:26.511 09:28:10 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:26.511 09:28:10 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.511 09:28:10 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:18:26.511 09:28:10 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.511 09:28:10 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:18:26.511 09:28:10 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.511 09:28:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:18:26.511 09:28:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:26.511 09:28:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:26.511 09:28:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:26.511 09:28:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:26.511 09:28:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:26.511 09:28:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:26.511 09:28:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:26.511 09:28:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:26.511 09:28:10 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:18:26.511 09:28:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:26.511 09:28:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:26.511 09:28:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:26.511 09:28:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:26.511 09:28:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:26.511 09:28:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:26.511 09:28:10 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:26.512 09:28:10 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:26.512 09:28:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:26.512 09:28:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:26.512 09:28:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:18:26.512 09:28:10 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:27.887 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:27.887 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:18:27.887 09:28:12 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:18:27.887 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:27.887 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:27.887 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:27.887 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:27.887 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:18:27.887 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:27.887 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:18:27.887 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:18:27.887 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:18:27.887 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:18:27.887 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:18:27.887 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:18:27.887 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:27.887 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:27.887 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:27.887 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:28.146 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:28.146 
09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:28.146 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:28.146 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:28.146 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:28.146 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:28.146 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.150 ms 00:18:28.146 00:18:28.146 --- 10.0.0.2 ping statistics --- 00:18:28.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:28.146 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:28.146 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:28.146 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms 00:18:28.146 00:18:28.146 --- 10.0.0.1 ping statistics --- 00:18:28.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:28.146 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=734390 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 734390 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 734390 ']' 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:28.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:28.146 09:28:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:28.146 [2024-07-14 09:28:12.564700] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:18:28.146 [2024-07-14 09:28:12.564800] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:28.146 EAL: No free 2048 kB hugepages reported on node 1 00:18:28.404 [2024-07-14 09:28:12.632999] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.404 [2024-07-14 09:28:12.724501] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:28.404 [2024-07-14 09:28:12.724565] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:28.404 [2024-07-14 09:28:12.724592] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:28.404 [2024-07-14 09:28:12.724606] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:28.404 [2024-07-14 09:28:12.724617] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:28.404 [2024-07-14 09:28:12.724657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:28.404 09:28:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:28.404 09:28:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:18:28.404 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:28.404 09:28:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:28.404 09:28:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:28.662 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:28.662 09:28:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:18:28.662 09:28:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:18:28.662 09:28:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.662 09:28:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:28.662 [2024-07-14 09:28:12.870170] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:28.662 09:28:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.662 09:28:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:28.662 09:28:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.662 09:28:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:28.662 09:28:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.662 09:28:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:28.662 09:28:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.662 09:28:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:28.662 [2024-07-14 09:28:12.886339] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:28.662 09:28:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.662 09:28:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:28.662 09:28:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.662 09:28:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:28.662 09:28:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.662 09:28:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:18:28.662 09:28:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.662 09:28:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:28.662 malloc0 00:18:28.662 09:28:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.662 
09:28:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:28.662 09:28:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.662 09:28:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:28.662 09:28:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.662 09:28:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:18:28.662 09:28:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:18:28.662 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:18:28.662 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:18:28.662 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:28.662 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:28.662 { 00:18:28.662 "params": { 00:18:28.662 "name": "Nvme$subsystem", 00:18:28.662 "trtype": "$TEST_TRANSPORT", 00:18:28.662 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:28.662 "adrfam": "ipv4", 00:18:28.662 "trsvcid": "$NVMF_PORT", 00:18:28.662 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:28.662 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:28.662 "hdgst": ${hdgst:-false}, 00:18:28.662 "ddgst": ${ddgst:-false} 00:18:28.662 }, 00:18:28.662 "method": "bdev_nvme_attach_controller" 00:18:28.662 } 00:18:28.662 EOF 00:18:28.662 )") 00:18:28.662 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:18:28.662 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:18:28.662 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:18:28.662 09:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:28.662 "params": { 00:18:28.662 "name": "Nvme1", 00:18:28.662 "trtype": "tcp", 00:18:28.662 "traddr": "10.0.0.2", 00:18:28.662 "adrfam": "ipv4", 00:18:28.662 "trsvcid": "4420", 00:18:28.662 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:28.662 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:28.662 "hdgst": false, 00:18:28.662 "ddgst": false 00:18:28.662 }, 00:18:28.662 "method": "bdev_nvme_attach_controller" 00:18:28.662 }' 00:18:28.662 [2024-07-14 09:28:12.966456] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:18:28.662 [2024-07-14 09:28:12.966538] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid734480 ] 00:18:28.662 EAL: No free 2048 kB hugepages reported on node 1 00:18:28.662 [2024-07-14 09:28:13.031942] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.920 [2024-07-14 09:28:13.127283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.920 Running I/O for 10 seconds... 
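Taken together, the zcopy.sh@13--@33 trace above configures the target over its RPC socket (TCP transport with zero-copy enabled, subsystem cnode1 with a listener on 10.0.0.2:4420 and a malloc-backed namespace) and then launches bdevperf against it with the JSON that gen_nvmf_target_json just printed. A rough stand-alone equivalent of the same steps, assuming the default /var/tmp/spdk.sock RPC socket, with every flag copied from this log; the /tmp/zcopy_target.json file name is purely illustrative and would hold the gen_nvmf_target_json output shown above:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy                                   # TCP transport, zero-copy enabled
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 4096 -b malloc0                                          # 32 MB malloc bdev, 4 KiB blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1                  # expose it as NSID 1
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        --json /tmp/zcopy_target.json -t 10 -q 128 -w verify -o 8192                    # 10 s verify run, QD 128, 8 KiB I/O

In the test itself the JSON is handed over on /dev/fd/62 rather than a file, and the target runs inside the cvl_0_0_ns_spdk namespace created earlier, which is why the harness wraps nvmf_tgt in "ip netns exec".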
00:18:41.115
00:18:41.115 Latency(us)
00:18:41.115 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:41.115 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:18:41.115 Verification LBA range: start 0x0 length 0x1000
00:18:41.115 Nvme1n1 : 10.05 5783.32 45.18 0.00 0.00 21982.51 1893.26 42137.22
00:18:41.115 ===================================================================================================================
00:18:41.115 Total : 5783.32 45.18 0.00 0.00 21982.51 1893.26 42137.22
00:18:41.115 09:28:23 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=735725 00:18:41.115 09:28:23 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:18:41.115 09:28:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:41.115 09:28:23 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:18:41.115 09:28:23 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:18:41.115 09:28:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:18:41.115 09:28:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:18:41.115 09:28:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:41.115 09:28:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:41.115 { 00:18:41.115 "params": { 00:18:41.115 "name": "Nvme$subsystem", 00:18:41.115 "trtype": "$TEST_TRANSPORT", 00:18:41.115 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:41.115 "adrfam": "ipv4", 00:18:41.115 "trsvcid": "$NVMF_PORT", 00:18:41.115 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:41.115 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:41.115 "hdgst": ${hdgst:-false}, 00:18:41.115 "ddgst": ${ddgst:-false} 00:18:41.115 }, 00:18:41.115 "method": "bdev_nvme_attach_controller" 00:18:41.115 } 00:18:41.115 EOF 00:18:41.115 )") 00:18:41.115 09:28:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:18:41.115 09:28:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:18:41.115 [2024-07-14 09:28:23.612932] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.115 [2024-07-14 09:28:23.612975] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.115 09:28:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:18:41.115 09:28:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:41.115 "params": { 00:18:41.115 "name": "Nvme1", 00:18:41.115 "trtype": "tcp", 00:18:41.115 "traddr": "10.0.0.2", 00:18:41.115 "adrfam": "ipv4", 00:18:41.115 "trsvcid": "4420", 00:18:41.115 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:41.115 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:41.115 "hdgst": false, 00:18:41.115 "ddgst": false 00:18:41.115 }, 00:18:41.115 "method": "bdev_nvme_attach_controller" 00:18:41.115 }' 00:18:41.115 [2024-07-14 09:28:23.620890] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.115 [2024-07-14 09:28:23.620932] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.115 [2024-07-14 09:28:23.628899] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.115 [2024-07-14 09:28:23.628937] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.115 [2024-07-14 09:28:23.636927] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:23.636950] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:23.644656] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:18:41.116 [2024-07-14 09:28:23.644729] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid735725 ] 00:18:41.116 [2024-07-14 09:28:23.644949] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:23.644972] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:23.652971] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:23.652993] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:23.660983] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:23.661005] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:23.669004] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:23.669026] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 EAL: No free 2048 kB hugepages reported on node 1 00:18:41.116 [2024-07-14 09:28:23.677025] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:23.677047] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:23.685046] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:23.685067] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:23.693082] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:23.693103] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:23.701087] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:23.701108] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:23.707224] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:41.116 [2024-07-14 09:28:23.709113] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:23.709135] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:23.717183] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:23.717237] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:23.725175] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:23.725198] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:23.733192] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:23.733238] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:23.741226] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:23.741252] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:23.749245] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:23.749272] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:23.757294] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:23.757328] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:23.765329] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:23.765373] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:23.773320] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:23.773345] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:23.781343] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:23.781369] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:23.789364] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:23.789389] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:23.797385] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:23.797410] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:23.800556] reactor.c: 941:reactor_run: *NOTICE*: Reactor 
started on core 0 00:18:41.116 [2024-07-14 09:28:23.805407] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:23.805431] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:23.813433] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:23.813458] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:23.821487] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:23.821529] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:23.829502] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:23.829543] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:23.837525] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:23.837567] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:23.845547] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:23.845589] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:23.853568] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:23.853608] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:23.861596] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:23.861640] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:23.869591] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:23.869620] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:23.877621] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:23.877653] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:23.885653] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:23.885693] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:23.893679] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:23.893718] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:23.901672] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:23.901698] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:23.909694] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:23.909719] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:23.917728] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:18:41.116 [2024-07-14 09:28:23.917759] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:23.925769] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:23.925797] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:23.933781] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:23.933809] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:23.941804] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:23.941832] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:23.949849] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:23.949886] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:23.957858] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:23.957893] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:23.965887] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:23.965914] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:23.973931] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:23.973953] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:23.981932] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:23.981957] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:23.989962] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:23.989983] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:23.997983] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:23.998007] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:24.005999] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:24.006021] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:24.014018] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:24.014043] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:24.022039] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:24.022065] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:24.030047] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:24.030076] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:24.038074] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:24.038097] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:24.046092] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:24.046126] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:24.054114] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:24.054155] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:24.062149] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:24.062170] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:24.070174] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:24.070195] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:24.078194] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:24.078232] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:24.086233] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:24.086259] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:24.094250] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:24.094276] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:24.102284] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:24.102314] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 Running I/O for 5 seconds... 
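The wall of "Requested NSID 1 already in use" / "Unable to add namespace" messages around this point is the target rejecting repeated nvmf_subsystem_add_ns calls for an NSID it already exposes: zcopy.sh@30 attached malloc0 as NSID 1 before the I/O phases started, and both error sites sit in the add-namespace RPC path. A hypothetical manual reproduction of one such rejected call, assuming the same default RPC socket (the harness issues these through rpc_cmd rather than by hand):

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # NSID 1 is already taken by malloc0, so the RPC fails and the target logs the
    # "Requested NSID 1 already in use" / "Unable to add namespace" pair seen throughout this run.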
00:18:41.116 [2024-07-14 09:28:24.110301] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:24.110328] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:24.124486] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:24.124516] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:24.135326] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:24.135360] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:24.147106] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:24.147135] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:24.158323] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:24.158352] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:24.169216] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:24.169244] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.116 [2024-07-14 09:28:24.180718] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.116 [2024-07-14 09:28:24.180746] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.117 [2024-07-14 09:28:24.191542] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.117 [2024-07-14 09:28:24.191570] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.117 [2024-07-14 09:28:24.203485] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.117 [2024-07-14 09:28:24.203517] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.117 [2024-07-14 09:28:24.214652] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.117 [2024-07-14 09:28:24.214689] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.117 [2024-07-14 09:28:24.225848] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.117 [2024-07-14 09:28:24.225885] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.117 [2024-07-14 09:28:24.237270] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.117 [2024-07-14 09:28:24.237299] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.117 [2024-07-14 09:28:24.248314] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.117 [2024-07-14 09:28:24.248345] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.117 [2024-07-14 09:28:24.259382] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.117 [2024-07-14 09:28:24.259412] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.117 [2024-07-14 09:28:24.270184] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.117 
[2024-07-14 09:28:24.270214] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.117 [2024-07-14 09:28:24.281420] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.117 [2024-07-14 09:28:24.281448] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.117 [2024-07-14 09:28:24.292488] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.117 [2024-07-14 09:28:24.292516] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.117 [2024-07-14 09:28:24.303193] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.117 [2024-07-14 09:28:24.303222] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.117 [2024-07-14 09:28:24.313611] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.117 [2024-07-14 09:28:24.313640] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.117 [2024-07-14 09:28:24.324529] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.117 [2024-07-14 09:28:24.324558] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.117 [2024-07-14 09:28:24.334875] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.117 [2024-07-14 09:28:24.334902] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.117 [2024-07-14 09:28:24.345663] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.117 [2024-07-14 09:28:24.345691] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.117 [2024-07-14 09:28:24.356475] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.117 [2024-07-14 09:28:24.356504] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.117 [2024-07-14 09:28:24.366617] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.117 [2024-07-14 09:28:24.366648] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.117 [2024-07-14 09:28:24.376534] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.117 [2024-07-14 09:28:24.376562] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.117 [2024-07-14 09:28:24.387015] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.117 [2024-07-14 09:28:24.387043] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.117 [2024-07-14 09:28:24.397514] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.117 [2024-07-14 09:28:24.397543] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.117 [2024-07-14 09:28:24.408111] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.117 [2024-07-14 09:28:24.408140] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.117 [2024-07-14 09:28:24.418198] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.117 [2024-07-14 09:28:24.418234] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.117 [2024-07-14 09:28:24.429571] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.117 [2024-07-14 09:28:24.429599] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.117 [2024-07-14 09:28:24.438984] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.117 [2024-07-14 09:28:24.439012] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.117 [2024-07-14 09:28:24.450021] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.117 [2024-07-14 09:28:24.450049] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.117 [2024-07-14 09:28:24.459851] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.117 [2024-07-14 09:28:24.459888] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.117 [2024-07-14 09:28:24.470601] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.117 [2024-07-14 09:28:24.470629] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.117 [2024-07-14 09:28:24.481249] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.117 [2024-07-14 09:28:24.481277] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.117 [2024-07-14 09:28:24.491873] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.117 [2024-07-14 09:28:24.491922] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.117 [2024-07-14 09:28:24.502053] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.117 [2024-07-14 09:28:24.502081] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.117 [2024-07-14 09:28:24.513318] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.117 [2024-07-14 09:28:24.513346] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.117 [2024-07-14 09:28:24.523881] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.117 [2024-07-14 09:28:24.523910] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.117 [2024-07-14 09:28:24.533822] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.117 [2024-07-14 09:28:24.533851] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.117 [2024-07-14 09:28:24.544944] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.117 [2024-07-14 09:28:24.544972] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.117 [2024-07-14 09:28:24.555348] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.117 [2024-07-14 09:28:24.555376] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.117 [2024-07-14 09:28:24.565770] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.117 [2024-07-14 09:28:24.565798] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.117 [2024-07-14 09:28:24.576376] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.117 [2024-07-14 09:28:24.576405] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.117 [2024-07-14 09:28:24.587216] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.117 [2024-07-14 09:28:24.587245] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.117 [2024-07-14 09:28:24.597836] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.117 [2024-07-14 09:28:24.597872] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.117 [2024-07-14 09:28:24.608479] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.117 [2024-07-14 09:28:24.608507] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.117 [2024-07-14 09:28:24.618720] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.117 [2024-07-14 09:28:24.618755] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.117 [2024-07-14 09:28:24.629924] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.117 [2024-07-14 09:28:24.629953] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.117 [2024-07-14 09:28:24.640090] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.117 [2024-07-14 09:28:24.640119] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.117 [2024-07-14 09:28:24.650567] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.117 [2024-07-14 09:28:24.650595] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.117 [2024-07-14 09:28:24.661331] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.117 [2024-07-14 09:28:24.661374] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.117 [2024-07-14 09:28:24.671749] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.117 [2024-07-14 09:28:24.671777] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.117 [2024-07-14 09:28:24.682495] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.117 [2024-07-14 09:28:24.682523] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.117 [2024-07-14 09:28:24.693178] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.117 [2024-07-14 09:28:24.693206] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.117 [2024-07-14 09:28:24.705442] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.117 [2024-07-14 09:28:24.705470] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.117 [2024-07-14 09:28:24.714955] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.117 [2024-07-14 09:28:24.714983] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.117 [2024-07-14 09:28:24.726444] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.117 [2024-07-14 09:28:24.726473] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.117 [2024-07-14 09:28:24.737598] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.117 [2024-07-14 09:28:24.737628] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.117 [2024-07-14 09:28:24.748618] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.117 [2024-07-14 09:28:24.748648] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.117 [2024-07-14 09:28:24.758936] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.117 [2024-07-14 09:28:24.758965] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.117 [2024-07-14 09:28:24.770271] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.117 [2024-07-14 09:28:24.770300] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.117 [2024-07-14 09:28:24.780516] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.117 [2024-07-14 09:28:24.780544] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.117 [2024-07-14 09:28:24.791594] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.117 [2024-07-14 09:28:24.791622] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.117 [2024-07-14 09:28:24.801974] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.117 [2024-07-14 09:28:24.802001] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.117 [2024-07-14 09:28:24.812392] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.118 [2024-07-14 09:28:24.812419] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.118 [2024-07-14 09:28:24.825214] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.118 [2024-07-14 09:28:24.825241] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.118 [2024-07-14 09:28:24.834953] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.118 [2024-07-14 09:28:24.834981] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.118 [2024-07-14 09:28:24.845704] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.118 [2024-07-14 09:28:24.845732] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.118 [2024-07-14 09:28:24.855895] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.118 [2024-07-14 09:28:24.855923] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.118 [2024-07-14 09:28:24.866022] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.118 [2024-07-14 09:28:24.866050] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.118 [2024-07-14 09:28:24.876170] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.118 [2024-07-14 09:28:24.876197] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.118 [2024-07-14 09:28:24.886098] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.118 [2024-07-14 09:28:24.886127] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.118 [2024-07-14 09:28:24.896711] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.118 [2024-07-14 09:28:24.896739] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.118 [2024-07-14 09:28:24.907342] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.118 [2024-07-14 09:28:24.907370] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.118 [2024-07-14 09:28:24.917609] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.118 [2024-07-14 09:28:24.917637] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.118 [2024-07-14 09:28:24.928089] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.118 [2024-07-14 09:28:24.928117] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.118 [2024-07-14 09:28:24.938319] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.118 [2024-07-14 09:28:24.938347] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.118 [2024-07-14 09:28:24.948752] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.118 [2024-07-14 09:28:24.948779] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.118 [2024-07-14 09:28:24.959263] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.118 [2024-07-14 09:28:24.959291] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.118 [2024-07-14 09:28:24.969462] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.118 [2024-07-14 09:28:24.969490] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.118 [2024-07-14 09:28:24.979407] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.118 [2024-07-14 09:28:24.979434] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.118 [2024-07-14 09:28:24.990411] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.118 [2024-07-14 09:28:24.990439] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.118 [2024-07-14 09:28:25.000738] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.118 [2024-07-14 09:28:25.000765] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.118 [2024-07-14 09:28:25.011550] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.118 [2024-07-14 09:28:25.011578] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.118 [2024-07-14 09:28:25.021927] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.118 [2024-07-14 09:28:25.021955] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.118 [2024-07-14 09:28:25.032012] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.118 [2024-07-14 09:28:25.032040] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.118 [2024-07-14 09:28:25.042405] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.118 [2024-07-14 09:28:25.042433] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.118 [2024-07-14 09:28:25.052480] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.118 [2024-07-14 09:28:25.052508] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.118 [2024-07-14 09:28:25.062623] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.118 [2024-07-14 09:28:25.062652] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.118 [2024-07-14 09:28:25.072725] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.118 [2024-07-14 09:28:25.072753] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.118 [2024-07-14 09:28:25.083194] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.118 [2024-07-14 09:28:25.083222] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.118 [2024-07-14 09:28:25.093287] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.118 [2024-07-14 09:28:25.093314] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.118 [2024-07-14 09:28:25.104106] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.118 [2024-07-14 09:28:25.104134] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.118 [2024-07-14 09:28:25.114373] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.118 [2024-07-14 09:28:25.114401] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.118 [2024-07-14 09:28:25.125228] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.118 [2024-07-14 09:28:25.125256] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.118 [2024-07-14 09:28:25.135937] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.118 [2024-07-14 09:28:25.135964] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.118 [2024-07-14 09:28:25.146412] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.118 [2024-07-14 09:28:25.146440] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.118 [2024-07-14 09:28:25.159028] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.118 [2024-07-14 09:28:25.159056] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.118 [2024-07-14 09:28:25.168948] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.118 [2024-07-14 09:28:25.168975] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.118 [2024-07-14 09:28:25.179731] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.118 [2024-07-14 09:28:25.179759] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.118 [2024-07-14 09:28:25.190172] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.118 [2024-07-14 09:28:25.190199] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:41.118 [2024-07-14 09:28:25.200812] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:41.118 [2024-07-14 09:28:25.200839] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same error pair (subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: "Requested NSID 1 already in use" followed by nvmf_rpc.c:1546:nvmf_rpc_ns_paused: "Unable to add namespace") repeats at roughly 10 ms intervals from 2024-07-14 09:28:25.211275 through 09:28:28.443059 (elapsed 00:18:41.118 - 00:18:44.263) ...]
00:18:44.263 [2024-07-14 09:28:28.443029] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:44.263 [2024-07-14 09:28:28.443059] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:44.263 [2024-07-14 09:28:28.453831]
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.263 [2024-07-14 09:28:28.453859] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.263 [2024-07-14 09:28:28.463718] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.263 [2024-07-14 09:28:28.463746] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.263 [2024-07-14 09:28:28.474735] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.263 [2024-07-14 09:28:28.474762] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.263 [2024-07-14 09:28:28.485320] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.263 [2024-07-14 09:28:28.485348] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.263 [2024-07-14 09:28:28.496046] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.263 [2024-07-14 09:28:28.496075] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.263 [2024-07-14 09:28:28.508822] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.263 [2024-07-14 09:28:28.508859] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.263 [2024-07-14 09:28:28.518501] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.263 [2024-07-14 09:28:28.518528] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.263 [2024-07-14 09:28:28.528921] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.263 [2024-07-14 09:28:28.528948] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.263 [2024-07-14 09:28:28.539401] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.263 [2024-07-14 09:28:28.539429] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.263 [2024-07-14 09:28:28.551668] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.263 [2024-07-14 09:28:28.551696] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.263 [2024-07-14 09:28:28.560937] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.263 [2024-07-14 09:28:28.560965] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.263 [2024-07-14 09:28:28.571860] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.263 [2024-07-14 09:28:28.571894] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.263 [2024-07-14 09:28:28.582218] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.263 [2024-07-14 09:28:28.582245] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.263 [2024-07-14 09:28:28.592503] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.263 [2024-07-14 09:28:28.592530] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.263 [2024-07-14 09:28:28.602769] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.263 [2024-07-14 09:28:28.602796] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.263 [2024-07-14 09:28:28.613350] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.263 [2024-07-14 09:28:28.613378] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.263 [2024-07-14 09:28:28.625553] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.263 [2024-07-14 09:28:28.625580] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.263 [2024-07-14 09:28:28.635027] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.263 [2024-07-14 09:28:28.635055] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.263 [2024-07-14 09:28:28.646172] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.263 [2024-07-14 09:28:28.646208] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.263 [2024-07-14 09:28:28.656737] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.263 [2024-07-14 09:28:28.656765] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.263 [2024-07-14 09:28:28.667744] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.263 [2024-07-14 09:28:28.667772] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.263 [2024-07-14 09:28:28.678554] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.263 [2024-07-14 09:28:28.678582] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.263 [2024-07-14 09:28:28.688780] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.263 [2024-07-14 09:28:28.688808] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.263 [2024-07-14 09:28:28.699410] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.263 [2024-07-14 09:28:28.699437] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.263 [2024-07-14 09:28:28.709799] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.263 [2024-07-14 09:28:28.709827] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.523 [2024-07-14 09:28:28.720636] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.523 [2024-07-14 09:28:28.720665] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.523 [2024-07-14 09:28:28.730427] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.523 [2024-07-14 09:28:28.730456] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.523 [2024-07-14 09:28:28.741026] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.523 [2024-07-14 09:28:28.741055] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.523 [2024-07-14 09:28:28.751557] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.523 [2024-07-14 09:28:28.751585] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.523 [2024-07-14 09:28:28.762065] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.523 [2024-07-14 09:28:28.762094] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.523 [2024-07-14 09:28:28.774652] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.523 [2024-07-14 09:28:28.774683] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.523 [2024-07-14 09:28:28.783789] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.523 [2024-07-14 09:28:28.783818] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.523 [2024-07-14 09:28:28.794837] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.523 [2024-07-14 09:28:28.794874] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.523 [2024-07-14 09:28:28.805006] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.523 [2024-07-14 09:28:28.805035] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.523 [2024-07-14 09:28:28.816581] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.523 [2024-07-14 09:28:28.816610] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.523 [2024-07-14 09:28:28.827444] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.523 [2024-07-14 09:28:28.827473] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.523 [2024-07-14 09:28:28.839609] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.523 [2024-07-14 09:28:28.839637] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.523 [2024-07-14 09:28:28.849002] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.523 [2024-07-14 09:28:28.849038] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.523 [2024-07-14 09:28:28.860303] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.523 [2024-07-14 09:28:28.860333] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.523 [2024-07-14 09:28:28.871281] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.523 [2024-07-14 09:28:28.871309] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.523 [2024-07-14 09:28:28.881982] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.523 [2024-07-14 09:28:28.882010] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.523 [2024-07-14 09:28:28.892951] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.523 [2024-07-14 09:28:28.892979] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.523 [2024-07-14 09:28:28.903975] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.523 [2024-07-14 09:28:28.904003] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.523 [2024-07-14 09:28:28.914770] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.523 [2024-07-14 09:28:28.914814] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.523 [2024-07-14 09:28:28.925943] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.523 [2024-07-14 09:28:28.925971] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.523 [2024-07-14 09:28:28.937022] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.523 [2024-07-14 09:28:28.937050] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.523 [2024-07-14 09:28:28.948306] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.523 [2024-07-14 09:28:28.948335] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.523 [2024-07-14 09:28:28.959224] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.523 [2024-07-14 09:28:28.959253] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.523 [2024-07-14 09:28:28.970069] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.523 [2024-07-14 09:28:28.970098] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.783 [2024-07-14 09:28:28.981291] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.783 [2024-07-14 09:28:28.981321] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.783 [2024-07-14 09:28:28.991014] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.783 [2024-07-14 09:28:28.991043] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.783 [2024-07-14 09:28:29.002082] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.783 [2024-07-14 09:28:29.002110] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.783 [2024-07-14 09:28:29.012959] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.783 [2024-07-14 09:28:29.012987] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.783 [2024-07-14 09:28:29.023470] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.783 [2024-07-14 09:28:29.023498] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.783 [2024-07-14 09:28:29.033329] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.783 [2024-07-14 09:28:29.033357] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.783 [2024-07-14 09:28:29.044613] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.783 [2024-07-14 09:28:29.044642] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.783 [2024-07-14 09:28:29.055505] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.783 [2024-07-14 09:28:29.055541] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.783 [2024-07-14 09:28:29.066326] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.783 [2024-07-14 09:28:29.066355] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.783 [2024-07-14 09:28:29.076769] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.783 [2024-07-14 09:28:29.076797] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.783 [2024-07-14 09:28:29.087039] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.783 [2024-07-14 09:28:29.087067] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.783 [2024-07-14 09:28:29.097485] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.783 [2024-07-14 09:28:29.097514] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.783 [2024-07-14 09:28:29.107878] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.783 [2024-07-14 09:28:29.107906] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.783 [2024-07-14 09:28:29.118256] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.783 [2024-07-14 09:28:29.118284] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.783 [2024-07-14 09:28:29.128641] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.783 [2024-07-14 09:28:29.128668] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.783 00:18:44.783 Latency(us) 00:18:44.783 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:44.783 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:18:44.783 Nvme1n1 : 5.01 12018.22 93.89 0.00 0.00 10636.13 3932.16 21942.42 00:18:44.783 =================================================================================================================== 00:18:44.783 Total : 12018.22 93.89 0.00 0.00 10636.13 3932.16 21942.42 00:18:44.783 [2024-07-14 09:28:29.134310] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.783 [2024-07-14 09:28:29.134336] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.783 [2024-07-14 09:28:29.142379] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.783 [2024-07-14 09:28:29.142407] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.783 [2024-07-14 09:28:29.150412] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.783 [2024-07-14 09:28:29.150447] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.783 [2024-07-14 09:28:29.158476] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.783 [2024-07-14 09:28:29.158525] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.783 [2024-07-14 09:28:29.166491] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.783 [2024-07-14 09:28:29.166542] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.783 [2024-07-14 09:28:29.174513] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.783 [2024-07-14 09:28:29.174564] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.783 [2024-07-14 09:28:29.182522] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.783 [2024-07-14 09:28:29.182574] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.783 [2024-07-14 09:28:29.190540] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.783 [2024-07-14 09:28:29.190587] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.783 [2024-07-14 09:28:29.198577] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.783 [2024-07-14 09:28:29.198638] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.783 [2024-07-14 09:28:29.206592] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.783 [2024-07-14 09:28:29.206640] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.783 [2024-07-14 09:28:29.214621] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.783 [2024-07-14 09:28:29.214674] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.783 [2024-07-14 09:28:29.222640] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.783 [2024-07-14 09:28:29.222690] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.783 [2024-07-14 09:28:29.230665] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.783 [2024-07-14 09:28:29.230718] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.043 [2024-07-14 09:28:29.238709] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.043 [2024-07-14 09:28:29.238778] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.043 [2024-07-14 09:28:29.246717] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.043 [2024-07-14 09:28:29.246773] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.043 [2024-07-14 09:28:29.254729] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.043 [2024-07-14 09:28:29.254778] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.043 [2024-07-14 09:28:29.262744] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.043 [2024-07-14 09:28:29.262795] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.043 [2024-07-14 09:28:29.270749] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.043 [2024-07-14 09:28:29.270782] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.043 [2024-07-14 09:28:29.278765] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.043 [2024-07-14 09:28:29.278795] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.043 [2024-07-14 09:28:29.286820] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.043 [2024-07-14 09:28:29.286875] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.043 [2024-07-14 09:28:29.294853] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.043 [2024-07-14 09:28:29.294931] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.043 [2024-07-14 09:28:29.302858] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.043 [2024-07-14 09:28:29.302932] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.043 [2024-07-14 09:28:29.310843] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.043 [2024-07-14 09:28:29.310877] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.043 [2024-07-14 09:28:29.318912] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.043 [2024-07-14 09:28:29.318948] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.043 [2024-07-14 09:28:29.326935] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.043 [2024-07-14 09:28:29.326986] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.043 [2024-07-14 09:28:29.334957] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.043 [2024-07-14 09:28:29.335006] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.043 [2024-07-14 09:28:29.342943] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.043 [2024-07-14 09:28:29.342964] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.043 [2024-07-14 09:28:29.350959] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.043 [2024-07-14 09:28:29.350980] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.043 [2024-07-14 09:28:29.358974] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.043 [2024-07-14 09:28:29.358996] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.043 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (735725) - No such process 00:18:45.043 09:28:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 735725 00:18:45.043 09:28:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:45.043 09:28:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.043 09:28:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:45.043 09:28:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.043 09:28:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:18:45.043 09:28:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.043 09:28:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:45.043 delay0 00:18:45.043 09:28:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.043 09:28:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:18:45.043 09:28:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.043 09:28:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:45.043 09:28:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.043 09:28:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw 
-M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:18:45.043 EAL: No free 2048 kB hugepages reported on node 1 00:18:45.302 [2024-07-14 09:28:29.516066] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:18:51.860 Initializing NVMe Controllers 00:18:51.860 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:51.860 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:51.860 Initialization complete. Launching workers. 00:18:51.860 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 118 00:18:51.860 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 405, failed to submit 33 00:18:51.860 success 192, unsuccess 213, failed 0 00:18:51.860 09:28:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:18:51.860 09:28:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:18:51.860 09:28:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:51.860 09:28:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:18:51.860 09:28:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:51.860 09:28:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:18:51.860 09:28:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:51.860 09:28:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:51.860 rmmod nvme_tcp 00:18:51.860 rmmod nvme_fabrics 00:18:51.860 rmmod nvme_keyring 00:18:51.860 09:28:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:51.860 09:28:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:18:51.860 09:28:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:18:51.860 09:28:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 734390 ']' 00:18:51.860 09:28:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 734390 00:18:51.860 09:28:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 734390 ']' 00:18:51.860 09:28:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 734390 00:18:51.860 09:28:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:18:51.860 09:28:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:51.860 09:28:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 734390 00:18:51.860 09:28:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:51.860 09:28:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:51.860 09:28:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 734390' 00:18:51.860 killing process with pid 734390 00:18:51.861 09:28:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 734390 00:18:51.861 09:28:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 734390 00:18:51.861 09:28:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:51.861 09:28:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:51.861 09:28:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:51.861 09:28:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:51.861 09:28:35 nvmf_tcp.nvmf_zcopy 
-- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:51.861 09:28:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:51.861 09:28:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:51.861 09:28:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:53.766 09:28:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:53.766 00:18:53.766 real 0m27.606s 00:18:53.766 user 0m40.879s 00:18:53.766 sys 0m8.224s 00:18:53.766 09:28:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:53.766 09:28:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:53.766 ************************************ 00:18:53.766 END TEST nvmf_zcopy 00:18:53.766 ************************************ 00:18:53.766 09:28:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:53.766 09:28:38 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:53.766 09:28:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:53.766 09:28:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:53.766 09:28:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:53.766 ************************************ 00:18:53.766 START TEST nvmf_nmic 00:18:53.766 ************************************ 00:18:53.766 09:28:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:53.766 * Looking for test storage... 00:18:53.766 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:53.766 09:28:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:53.766 09:28:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:18:53.766 09:28:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:53.766 09:28:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:53.766 09:28:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:53.766 09:28:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:53.766 09:28:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:53.766 09:28:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:53.766 09:28:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:53.766 09:28:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:53.766 09:28:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:53.766 09:28:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:53.766 09:28:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:53.766 09:28:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:53.766 09:28:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:53.766 09:28:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:53.766 09:28:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:53.766 09:28:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 
-- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:53.766 09:28:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:53.766 09:28:38 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:53.766 09:28:38 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:53.766 09:28:38 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:53.766 09:28:38 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.766 09:28:38 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.766 09:28:38 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.766 09:28:38 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:18:53.766 09:28:38 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.766 09:28:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:18:53.766 09:28:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:53.766 09:28:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:53.766 09:28:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:53.766 09:28:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:53.766 09:28:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:53.766 09:28:38 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:53.766 09:28:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:53.766 09:28:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:53.766 09:28:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:53.766 09:28:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:53.766 09:28:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:18:53.766 09:28:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:53.766 09:28:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:53.766 09:28:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:53.766 09:28:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:53.766 09:28:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:53.766 09:28:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:53.766 09:28:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:53.766 09:28:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:53.766 09:28:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:53.766 09:28:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:53.766 09:28:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:18:53.766 09:28:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:55.666 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:55.666 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:55.666 Found net devices 
under 0000:0a:00.0: cvl_0_0 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:55.666 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:55.666 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:55.925 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:55.925 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:55.925 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:55.925 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:55.925 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:55.925 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:55.925 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # 
ping -c 1 10.0.0.2 00:18:55.925 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:55.925 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:18:55.925 00:18:55.925 --- 10.0.0.2 ping statistics --- 00:18:55.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:55.925 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:18:55.925 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:55.925 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:55.925 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:18:55.925 00:18:55.925 --- 10.0.0.1 ping statistics --- 00:18:55.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:55.925 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:18:55.925 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:55.925 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:18:55.925 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:55.925 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:55.925 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:55.925 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:55.925 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:55.925 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:55.925 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:55.925 09:28:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:18:55.925 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:55.925 09:28:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:55.925 09:28:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:55.925 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=739115 00:18:55.925 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:55.925 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 739115 00:18:55.925 09:28:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 739115 ']' 00:18:55.925 09:28:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:55.925 09:28:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:55.925 09:28:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:55.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:55.925 09:28:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:55.925 09:28:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:55.925 [2024-07-14 09:28:40.335822] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:18:55.925 [2024-07-14 09:28:40.335929] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:55.925 EAL: No free 2048 kB hugepages reported on node 1 00:18:56.184 [2024-07-14 09:28:40.434977] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:56.184 [2024-07-14 09:28:40.551170] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:56.184 [2024-07-14 09:28:40.551253] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:56.184 [2024-07-14 09:28:40.551285] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:56.184 [2024-07-14 09:28:40.551315] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:56.184 [2024-07-14 09:28:40.551342] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:56.184 [2024-07-14 09:28:40.551421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:56.184 [2024-07-14 09:28:40.551481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:56.184 [2024-07-14 09:28:40.551552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:56.184 [2024-07-14 09:28:40.551541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:56.442 09:28:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:56.442 09:28:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:18:56.442 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:56.442 09:28:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:56.442 09:28:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:56.442 09:28:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:56.442 09:28:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:56.442 09:28:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.442 09:28:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:56.442 [2024-07-14 09:28:40.707812] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:56.442 09:28:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.442 09:28:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:56.442 09:28:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.442 09:28:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:56.442 Malloc0 00:18:56.442 09:28:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.442 09:28:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:56.442 09:28:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.442 09:28:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:56.442 09:28:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.442 09:28:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:56.442 09:28:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.442 09:28:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:56.442 09:28:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.442 09:28:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:56.442 09:28:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.442 09:28:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:56.442 [2024-07-14 09:28:40.760971] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:56.442 09:28:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.442 09:28:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:18:56.442 test case1: single bdev can't be used in multiple subsystems 00:18:56.442 09:28:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:56.442 09:28:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.442 09:28:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:56.442 09:28:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.442 09:28:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:56.442 09:28:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.442 09:28:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:56.442 09:28:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.442 09:28:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:18:56.442 09:28:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:18:56.442 09:28:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.442 09:28:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:56.443 [2024-07-14 09:28:40.784809] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:18:56.443 [2024-07-14 09:28:40.784861] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:18:56.443 [2024-07-14 09:28:40.784885] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.443 request: 00:18:56.443 { 00:18:56.443 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:18:56.443 "namespace": { 00:18:56.443 "bdev_name": "Malloc0", 00:18:56.443 "no_auto_visible": false 00:18:56.443 }, 00:18:56.443 "method": "nvmf_subsystem_add_ns", 00:18:56.443 "req_id": 1 00:18:56.443 } 00:18:56.443 Got JSON-RPC error response 00:18:56.443 response: 00:18:56.443 { 00:18:56.443 "code": -32602, 00:18:56.443 "message": "Invalid parameters" 00:18:56.443 } 00:18:56.443 09:28:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:56.443 09:28:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:18:56.443 09:28:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:18:56.443 09:28:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # 
echo ' Adding namespace failed - expected result.' 00:18:56.443 Adding namespace failed - expected result. 00:18:56.443 09:28:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:18:56.443 test case2: host connect to nvmf target in multiple paths 00:18:56.443 09:28:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:56.443 09:28:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.443 09:28:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:56.443 [2024-07-14 09:28:40.792932] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:56.443 09:28:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.443 09:28:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:57.376 09:28:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:18:57.942 09:28:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:18:57.942 09:28:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:18:57.942 09:28:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:57.942 09:28:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:57.942 09:28:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:18:59.855 09:28:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:59.855 09:28:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:59.855 09:28:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:59.855 09:28:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:59.855 09:28:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:59.855 09:28:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:18:59.855 09:28:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:59.855 [global] 00:18:59.855 thread=1 00:18:59.855 invalidate=1 00:18:59.855 rw=write 00:18:59.855 time_based=1 00:18:59.855 runtime=1 00:18:59.855 ioengine=libaio 00:18:59.855 direct=1 00:18:59.855 bs=4096 00:18:59.855 iodepth=1 00:18:59.855 norandommap=0 00:18:59.855 numjobs=1 00:18:59.855 00:18:59.855 verify_dump=1 00:18:59.855 verify_backlog=512 00:18:59.855 verify_state_save=0 00:18:59.855 do_verify=1 00:18:59.855 verify=crc32c-intel 00:18:59.855 [job0] 00:18:59.855 filename=/dev/nvme0n1 00:18:59.855 Could not set queue depth (nvme0n1) 00:19:00.112 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:00.112 fio-3.35 00:19:00.112 Starting 1 thread 00:19:01.044 00:19:01.044 job0: (groupid=0, jobs=1): err= 0: pid=739636: Sun Jul 14 09:28:45 2024 00:19:01.044 read: IOPS=19, BW=79.2KiB/s 
(81.1kB/s)(80.0KiB/1010msec) 00:19:01.044 slat (nsec): min=15742, max=34684, avg=24432.85, stdev=8144.25 00:19:01.044 clat (usec): min=549, max=42059, avg=39250.05, stdev=9120.31 00:19:01.044 lat (usec): min=572, max=42077, avg=39274.48, stdev=9120.61 00:19:01.044 clat percentiles (usec): 00:19:01.044 | 1.00th=[ 553], 5.00th=[ 553], 10.00th=[40633], 20.00th=[41157], 00:19:01.044 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:01.044 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:19:01.044 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:01.044 | 99.99th=[42206] 00:19:01.044 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:19:01.044 slat (usec): min=8, max=28722, avg=83.40, stdev=1268.23 00:19:01.044 clat (usec): min=259, max=539, avg=346.51, stdev=50.20 00:19:01.044 lat (usec): min=268, max=29039, avg=429.90, stdev=1268.15 00:19:01.044 clat percentiles (usec): 00:19:01.044 | 1.00th=[ 262], 5.00th=[ 273], 10.00th=[ 285], 20.00th=[ 310], 00:19:01.044 | 30.00th=[ 318], 40.00th=[ 326], 50.00th=[ 330], 60.00th=[ 347], 00:19:01.044 | 70.00th=[ 367], 80.00th=[ 408], 90.00th=[ 429], 95.00th=[ 433], 00:19:01.044 | 99.00th=[ 449], 99.50th=[ 457], 99.90th=[ 537], 99.95th=[ 537], 00:19:01.044 | 99.99th=[ 537] 00:19:01.044 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:19:01.044 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:01.045 lat (usec) : 500=96.05%, 750=0.38% 00:19:01.045 lat (msec) : 50=3.57% 00:19:01.045 cpu : usr=0.99%, sys=1.68%, ctx=535, majf=0, minf=2 00:19:01.045 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:01.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.045 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.045 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:01.045 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:01.045 00:19:01.045 Run status group 0 (all jobs): 00:19:01.045 READ: bw=79.2KiB/s (81.1kB/s), 79.2KiB/s-79.2KiB/s (81.1kB/s-81.1kB/s), io=80.0KiB (81.9kB), run=1010-1010msec 00:19:01.045 WRITE: bw=2028KiB/s (2076kB/s), 2028KiB/s-2028KiB/s (2076kB/s-2076kB/s), io=2048KiB (2097kB), run=1010-1010msec 00:19:01.045 00:19:01.045 Disk stats (read/write): 00:19:01.045 nvme0n1: ios=57/512, merge=0/0, ticks=1663/151, in_queue=1814, util=98.60% 00:19:01.045 09:28:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:01.303 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:19:01.303 09:28:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:01.303 09:28:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:19:01.303 09:28:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:01.303 09:28:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:01.303 09:28:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:01.303 09:28:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:01.303 09:28:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:19:01.303 09:28:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:19:01.303 09:28:45 nvmf_tcp.nvmf_nmic -- 
target/nmic.sh@53 -- # nvmftestfini 00:19:01.303 09:28:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:01.303 09:28:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:19:01.303 09:28:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:01.303 09:28:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:19:01.303 09:28:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:01.303 09:28:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:01.303 rmmod nvme_tcp 00:19:01.303 rmmod nvme_fabrics 00:19:01.303 rmmod nvme_keyring 00:19:01.303 09:28:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:01.303 09:28:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:19:01.303 09:28:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:19:01.303 09:28:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 739115 ']' 00:19:01.303 09:28:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 739115 00:19:01.303 09:28:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 739115 ']' 00:19:01.303 09:28:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 739115 00:19:01.303 09:28:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:19:01.303 09:28:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:01.303 09:28:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 739115 00:19:01.303 09:28:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:01.303 09:28:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:01.303 09:28:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 739115' 00:19:01.303 killing process with pid 739115 00:19:01.303 09:28:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 739115 00:19:01.303 09:28:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 739115 00:19:01.561 09:28:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:01.561 09:28:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:01.561 09:28:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:01.561 09:28:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:01.561 09:28:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:01.561 09:28:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:01.561 09:28:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:01.561 09:28:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:04.146 09:28:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:04.146 00:19:04.146 real 0m9.970s 00:19:04.146 user 0m22.590s 00:19:04.146 sys 0m2.364s 00:19:04.146 09:28:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:04.146 09:28:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:04.146 ************************************ 00:19:04.146 END TEST nvmf_nmic 00:19:04.146 ************************************ 00:19:04.146 09:28:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:04.146 09:28:48 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:04.146 09:28:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:04.146 09:28:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:04.146 09:28:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:04.146 ************************************ 00:19:04.146 START TEST nvmf_fio_target 00:19:04.146 ************************************ 00:19:04.146 09:28:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:04.146 * Looking for test storage... 00:19:04.146 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:04.146 09:28:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:04.146 09:28:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:19:04.146 09:28:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:04.146 09:28:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:04.146 09:28:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:04.146 09:28:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:04.146 09:28:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:04.146 09:28:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:04.146 09:28:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:04.146 09:28:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:04.146 09:28:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:04.146 09:28:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:04.146 09:28:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:04.146 09:28:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:04.146 09:28:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:04.146 09:28:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:04.146 09:28:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:04.146 09:28:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:04.146 09:28:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:04.146 09:28:48 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:04.146 09:28:48 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:04.146 09:28:48 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:04.146 09:28:48 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.146 09:28:48 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.146 09:28:48 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.146 09:28:48 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:19:04.146 09:28:48 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.146 09:28:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:19:04.146 09:28:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:04.146 09:28:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:04.146 09:28:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:04.146 09:28:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:04.146 09:28:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:04.146 09:28:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:04.146 09:28:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:04.146 09:28:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:04.146 09:28:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:04.146 09:28:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:04.146 09:28:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:04.146 09:28:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:19:04.146 09:28:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:04.146 09:28:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:04.146 09:28:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:04.146 09:28:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:04.146 09:28:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:04.146 09:28:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:04.146 09:28:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:04.146 09:28:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:04.146 09:28:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:04.146 09:28:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:04.146 09:28:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:19:04.146 09:28:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.047 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:06.047 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:19:06.047 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:06.047 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:06.047 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:06.047 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:06.047 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:06.047 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:19:06.047 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:06.047 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:19:06.047 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:19:06.047 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:19:06.047 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:19:06.047 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:19:06.047 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:19:06.047 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:06.047 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:06.047 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:06.047 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:06.047 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:06.047 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:06.047 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:06.047 09:28:50 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:06.047 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:06.047 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:06.047 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:06.047 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:06.047 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:06.047 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:06.047 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:06.047 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:06.047 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:06.047 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:06.047 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:06.047 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:06.047 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:06.047 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:06.047 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:06.047 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:06.047 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:06.047 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:06.047 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:06.047 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:06.047 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:06.047 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:06.047 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:06.048 09:28:50 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:06.048 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:06.048 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:06.048 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:06.048 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:19:06.048 00:19:06.048 --- 10.0.0.2 ping statistics --- 00:19:06.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:06.048 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:06.048 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:06.048 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:19:06.048 00:19:06.048 --- 10.0.0.1 ping statistics --- 00:19:06.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:06.048 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=741701 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 741701 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 741701 ']' 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:06.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
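The block above is the standard nvmftestinit plumbing for these TCP tests: the target-side e810 port (cvl_0_0) is moved into a private network namespace, 10.0.0.1/24 and 10.0.0.2/24 are put on the initiator and target sides, port 4420 is opened with iptables, connectivity is ping-checked in both directions, and nvme-tcp is loaded before nvmf_tgt is started. A hand-condensed sketch of those steps follows; it reuses the interface names from the log but is not the literal nvmf/common.sh code.
# Condensed sketch of the namespace setup traced above (assumes cvl_0_0/cvl_0_1 already exist and are idle)
ip netns add cvl_0_0_ns_spdk                                   # private namespace for the target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the target-side port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address on the host side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                             # host -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target namespace -> host
modprobe nvme-tcp                                              # kernel initiator used by the later nvme connect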
00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:06.048 09:28:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.048 [2024-07-14 09:28:50.349944] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:19:06.048 [2024-07-14 09:28:50.350049] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:06.048 EAL: No free 2048 kB hugepages reported on node 1 00:19:06.048 [2024-07-14 09:28:50.420652] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:06.306 [2024-07-14 09:28:50.517762] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:06.306 [2024-07-14 09:28:50.517816] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:06.306 [2024-07-14 09:28:50.517843] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:06.306 [2024-07-14 09:28:50.517857] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:06.306 [2024-07-14 09:28:50.517877] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:06.306 [2024-07-14 09:28:50.517955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:06.306 [2024-07-14 09:28:50.517983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:06.306 [2024-07-14 09:28:50.518236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:06.306 [2024-07-14 09:28:50.518240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:06.306 09:28:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:06.306 09:28:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:19:06.306 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:06.306 09:28:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:06.306 09:28:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.306 09:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:06.306 09:28:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:06.563 [2024-07-14 09:28:50.943681] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:06.563 09:28:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:06.820 09:28:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:19:06.820 09:28:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:07.077 09:28:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:19:07.077 09:28:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:07.334 09:28:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
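From here target/fio.sh builds its test configuration entirely through rpc.py against the nvmf_tgt started above: as the following lines show, it creates several 64 MB malloc bdevs, combines two of them into a raid0 bdev and three into a concat bdev, exposes Malloc0, Malloc1, raid0 and concat0 as namespaces of nqn.2016-06.io.spdk:cnode1, and adds a TCP listener on 10.0.0.2:4420. A hedged, condensed sketch of that RPC sequence (the rpc.py path is shortened here; the log uses the full workspace path):
# Condensed restatement of the rpc.py calls traced below by test/nvmf/target/fio.sh
rpc=scripts/rpc.py
for i in $(seq 7); do $rpc bdev_malloc_create 64 512; done            # auto-named Malloc0..Malloc6
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'        # striped pair
$rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
for b in Malloc0 Malloc1 raid0 concat0; do
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 $b            # four namespaces in one subsystem
done
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
With those four namespaces exported, the single nvme connect that follows yields /dev/nvme0n1 through /dev/nvme0n4, which is why waitforserial is invoked with a device count of 4 and the fio wrapper is given four job files.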
00:19:07.334 09:28:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:07.591 09:28:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:19:07.591 09:28:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:19:07.847 09:28:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:08.105 09:28:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:19:08.105 09:28:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:08.364 09:28:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:19:08.364 09:28:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:08.622 09:28:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:19:08.622 09:28:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:19:08.879 09:28:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:09.136 09:28:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:09.136 09:28:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:09.393 09:28:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:09.393 09:28:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:09.650 09:28:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:09.907 [2024-07-14 09:28:54.265906] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:09.907 09:28:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:19:10.164 09:28:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:19:10.422 09:28:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:11.353 09:28:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:19:11.353 09:28:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:19:11.353 09:28:55 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:11.353 09:28:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:19:11.353 09:28:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:19:11.353 09:28:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:19:13.248 09:28:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:13.248 09:28:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:13.248 09:28:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:13.248 09:28:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:19:13.248 09:28:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:13.248 09:28:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:19:13.248 09:28:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:13.248 [global] 00:19:13.248 thread=1 00:19:13.248 invalidate=1 00:19:13.248 rw=write 00:19:13.248 time_based=1 00:19:13.248 runtime=1 00:19:13.248 ioengine=libaio 00:19:13.248 direct=1 00:19:13.248 bs=4096 00:19:13.248 iodepth=1 00:19:13.248 norandommap=0 00:19:13.248 numjobs=1 00:19:13.248 00:19:13.248 verify_dump=1 00:19:13.248 verify_backlog=512 00:19:13.248 verify_state_save=0 00:19:13.248 do_verify=1 00:19:13.248 verify=crc32c-intel 00:19:13.248 [job0] 00:19:13.248 filename=/dev/nvme0n1 00:19:13.248 [job1] 00:19:13.248 filename=/dev/nvme0n2 00:19:13.248 [job2] 00:19:13.248 filename=/dev/nvme0n3 00:19:13.248 [job3] 00:19:13.248 filename=/dev/nvme0n4 00:19:13.248 Could not set queue depth (nvme0n1) 00:19:13.248 Could not set queue depth (nvme0n2) 00:19:13.248 Could not set queue depth (nvme0n3) 00:19:13.248 Could not set queue depth (nvme0n4) 00:19:13.248 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:13.248 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:13.248 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:13.248 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:13.248 fio-3.35 00:19:13.248 Starting 4 threads 00:19:14.619 00:19:14.619 job0: (groupid=0, jobs=1): err= 0: pid=742767: Sun Jul 14 09:28:58 2024 00:19:14.619 read: IOPS=386, BW=1546KiB/s (1583kB/s)(1608KiB/1040msec) 00:19:14.619 slat (nsec): min=7123, max=35032, avg=15600.05, stdev=5357.74 00:19:14.619 clat (usec): min=492, max=41068, avg=2071.59, stdev=7666.67 00:19:14.619 lat (usec): min=500, max=41102, avg=2087.19, stdev=7668.59 00:19:14.619 clat percentiles (usec): 00:19:14.619 | 1.00th=[ 502], 5.00th=[ 519], 10.00th=[ 529], 20.00th=[ 537], 00:19:14.619 | 30.00th=[ 545], 40.00th=[ 545], 50.00th=[ 553], 60.00th=[ 562], 00:19:14.619 | 70.00th=[ 570], 80.00th=[ 578], 90.00th=[ 594], 95.00th=[ 644], 00:19:14.619 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:14.619 | 99.99th=[41157] 00:19:14.619 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:19:14.619 slat (nsec): min=8439, max=66970, avg=22332.05, stdev=11190.68 00:19:14.619 clat 
(usec): min=227, max=606, avg=359.24, stdev=69.44 00:19:14.619 lat (usec): min=237, max=617, avg=381.57, stdev=72.04 00:19:14.619 clat percentiles (usec): 00:19:14.619 | 1.00th=[ 253], 5.00th=[ 265], 10.00th=[ 281], 20.00th=[ 297], 00:19:14.619 | 30.00th=[ 310], 40.00th=[ 326], 50.00th=[ 347], 60.00th=[ 371], 00:19:14.619 | 70.00th=[ 400], 80.00th=[ 429], 90.00th=[ 457], 95.00th=[ 482], 00:19:14.619 | 99.00th=[ 523], 99.50th=[ 562], 99.90th=[ 611], 99.95th=[ 611], 00:19:14.619 | 99.99th=[ 611] 00:19:14.619 bw ( KiB/s): min= 4096, max= 4096, per=26.60%, avg=4096.00, stdev= 0.00, samples=1 00:19:14.619 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:14.619 lat (usec) : 250=0.44%, 500=54.27%, 750=43.54% 00:19:14.619 lat (msec) : 4=0.11%, 50=1.64% 00:19:14.619 cpu : usr=1.73%, sys=1.73%, ctx=914, majf=0, minf=2 00:19:14.619 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:14.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.619 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.619 issued rwts: total=402,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.619 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:14.619 job1: (groupid=0, jobs=1): err= 0: pid=742768: Sun Jul 14 09:28:58 2024 00:19:14.619 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:19:14.619 slat (nsec): min=7599, max=69617, avg=22907.44, stdev=8814.30 00:19:14.619 clat (usec): min=436, max=688, avg=514.89, stdev=42.32 00:19:14.619 lat (usec): min=453, max=720, avg=537.80, stdev=45.86 00:19:14.619 clat percentiles (usec): 00:19:14.620 | 1.00th=[ 453], 5.00th=[ 461], 10.00th=[ 469], 20.00th=[ 474], 00:19:14.620 | 30.00th=[ 482], 40.00th=[ 494], 50.00th=[ 510], 60.00th=[ 529], 00:19:14.620 | 70.00th=[ 537], 80.00th=[ 553], 90.00th=[ 570], 95.00th=[ 594], 00:19:14.620 | 99.00th=[ 635], 99.50th=[ 635], 99.90th=[ 676], 99.95th=[ 693], 00:19:14.620 | 99.99th=[ 693] 00:19:14.620 write: IOPS=1442, BW=5770KiB/s (5909kB/s)(5776KiB/1001msec); 0 zone resets 00:19:14.620 slat (nsec): min=6665, max=61660, avg=19083.78, stdev=9553.30 00:19:14.620 clat (usec): min=200, max=577, avg=282.12, stdev=65.92 00:19:14.620 lat (usec): min=213, max=586, avg=301.20, stdev=68.68 00:19:14.620 clat percentiles (usec): 00:19:14.620 | 1.00th=[ 212], 5.00th=[ 217], 10.00th=[ 221], 20.00th=[ 231], 00:19:14.620 | 30.00th=[ 239], 40.00th=[ 251], 50.00th=[ 262], 60.00th=[ 273], 00:19:14.620 | 70.00th=[ 285], 80.00th=[ 330], 90.00th=[ 396], 95.00th=[ 416], 00:19:14.620 | 99.00th=[ 490], 99.50th=[ 515], 99.90th=[ 570], 99.95th=[ 578], 00:19:14.620 | 99.99th=[ 578] 00:19:14.620 bw ( KiB/s): min= 5200, max= 5200, per=33.77%, avg=5200.00, stdev= 0.00, samples=1 00:19:14.620 iops : min= 1300, max= 1300, avg=1300.00, stdev= 0.00, samples=1 00:19:14.620 lat (usec) : 250=22.81%, 500=53.24%, 750=23.95% 00:19:14.620 cpu : usr=2.90%, sys=6.10%, ctx=2469, majf=0, minf=1 00:19:14.620 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:14.620 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.620 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.620 issued rwts: total=1024,1444,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.620 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:14.620 job2: (groupid=0, jobs=1): err= 0: pid=742769: Sun Jul 14 09:28:58 2024 00:19:14.620 read: IOPS=20, BW=83.0KiB/s (85.0kB/s)(84.0KiB/1012msec) 00:19:14.620 slat 
(nsec): min=15816, max=35039, avg=22515.19, stdev=8478.69 00:19:14.620 clat (usec): min=40940, max=41938, avg=41041.49, stdev=224.04 00:19:14.620 lat (usec): min=40956, max=41955, avg=41064.01, stdev=222.64 00:19:14.620 clat percentiles (usec): 00:19:14.620 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:19:14.620 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:14.620 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:14.620 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:19:14.620 | 99.99th=[41681] 00:19:14.620 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:19:14.620 slat (nsec): min=7427, max=53385, avg=16017.35, stdev=7108.28 00:19:14.620 clat (usec): min=222, max=406, avg=271.18, stdev=25.01 00:19:14.620 lat (usec): min=231, max=428, avg=287.19, stdev=28.23 00:19:14.620 clat percentiles (usec): 00:19:14.620 | 1.00th=[ 231], 5.00th=[ 237], 10.00th=[ 243], 20.00th=[ 253], 00:19:14.620 | 30.00th=[ 260], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 273], 00:19:14.620 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 306], 95.00th=[ 314], 00:19:14.620 | 99.00th=[ 347], 99.50th=[ 396], 99.90th=[ 408], 99.95th=[ 408], 00:19:14.620 | 99.99th=[ 408] 00:19:14.620 bw ( KiB/s): min= 4096, max= 4096, per=26.60%, avg=4096.00, stdev= 0.00, samples=1 00:19:14.620 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:14.620 lat (usec) : 250=16.89%, 500=79.17% 00:19:14.620 lat (msec) : 50=3.94% 00:19:14.620 cpu : usr=0.69%, sys=0.99%, ctx=533, majf=0, minf=1 00:19:14.620 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:14.620 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.620 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.620 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.620 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:14.620 job3: (groupid=0, jobs=1): err= 0: pid=742770: Sun Jul 14 09:28:58 2024 00:19:14.620 read: IOPS=1125, BW=4503KiB/s (4612kB/s)(4508KiB/1001msec) 00:19:14.620 slat (nsec): min=6147, max=64466, avg=25502.20, stdev=9593.24 00:19:14.620 clat (usec): min=334, max=579, avg=441.25, stdev=43.75 00:19:14.620 lat (usec): min=344, max=596, avg=466.75, stdev=47.27 00:19:14.620 clat percentiles (usec): 00:19:14.620 | 1.00th=[ 343], 5.00th=[ 355], 10.00th=[ 375], 20.00th=[ 404], 00:19:14.620 | 30.00th=[ 420], 40.00th=[ 437], 50.00th=[ 449], 60.00th=[ 461], 00:19:14.620 | 70.00th=[ 469], 80.00th=[ 478], 90.00th=[ 490], 95.00th=[ 498], 00:19:14.620 | 99.00th=[ 523], 99.50th=[ 562], 99.90th=[ 570], 99.95th=[ 578], 00:19:14.620 | 99.99th=[ 578] 00:19:14.620 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:19:14.620 slat (nsec): min=7303, max=71144, avg=19552.15, stdev=9479.97 00:19:14.620 clat (usec): min=210, max=599, avg=278.74, stdev=60.15 00:19:14.620 lat (usec): min=219, max=624, avg=298.29, stdev=63.00 00:19:14.620 clat percentiles (usec): 00:19:14.620 | 1.00th=[ 217], 5.00th=[ 221], 10.00th=[ 225], 20.00th=[ 231], 00:19:14.620 | 30.00th=[ 237], 40.00th=[ 243], 50.00th=[ 258], 60.00th=[ 277], 00:19:14.620 | 70.00th=[ 297], 80.00th=[ 326], 90.00th=[ 371], 95.00th=[ 400], 00:19:14.620 | 99.00th=[ 461], 99.50th=[ 498], 99.90th=[ 570], 99.95th=[ 603], 00:19:14.620 | 99.99th=[ 603] 00:19:14.620 bw ( KiB/s): min= 5928, max= 5928, per=38.49%, avg=5928.00, stdev= 0.00, samples=1 00:19:14.620 iops 
: min= 1482, max= 1482, avg=1482.00, stdev= 0.00, samples=1 00:19:14.620 lat (usec) : 250=27.41%, 500=70.41%, 750=2.18% 00:19:14.620 cpu : usr=3.20%, sys=5.90%, ctx=2666, majf=0, minf=1 00:19:14.620 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:14.620 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.620 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.620 issued rwts: total=1127,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.620 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:14.620 00:19:14.620 Run status group 0 (all jobs): 00:19:14.620 READ: bw=9900KiB/s (10.1MB/s), 83.0KiB/s-4503KiB/s (85.0kB/s-4612kB/s), io=10.1MiB (10.5MB), run=1001-1040msec 00:19:14.620 WRITE: bw=15.0MiB/s (15.8MB/s), 1969KiB/s-6138KiB/s (2016kB/s-6285kB/s), io=15.6MiB (16.4MB), run=1001-1040msec 00:19:14.620 00:19:14.620 Disk stats (read/write): 00:19:14.620 nvme0n1: ios=447/512, merge=0/0, ticks=882/175, in_queue=1057, util=91.58% 00:19:14.620 nvme0n2: ios=1013/1024, merge=0/0, ticks=532/272, in_queue=804, util=87.07% 00:19:14.620 nvme0n3: ios=16/512, merge=0/0, ticks=658/133, in_queue=791, util=88.88% 00:19:14.620 nvme0n4: ios=1048/1169, merge=0/0, ticks=1338/318, in_queue=1656, util=97.67% 00:19:14.620 09:28:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:19:14.620 [global] 00:19:14.620 thread=1 00:19:14.620 invalidate=1 00:19:14.620 rw=randwrite 00:19:14.620 time_based=1 00:19:14.620 runtime=1 00:19:14.620 ioengine=libaio 00:19:14.620 direct=1 00:19:14.620 bs=4096 00:19:14.620 iodepth=1 00:19:14.620 norandommap=0 00:19:14.620 numjobs=1 00:19:14.620 00:19:14.620 verify_dump=1 00:19:14.620 verify_backlog=512 00:19:14.620 verify_state_save=0 00:19:14.620 do_verify=1 00:19:14.620 verify=crc32c-intel 00:19:14.620 [job0] 00:19:14.620 filename=/dev/nvme0n1 00:19:14.620 [job1] 00:19:14.620 filename=/dev/nvme0n2 00:19:14.620 [job2] 00:19:14.620 filename=/dev/nvme0n3 00:19:14.620 [job3] 00:19:14.620 filename=/dev/nvme0n4 00:19:14.620 Could not set queue depth (nvme0n1) 00:19:14.620 Could not set queue depth (nvme0n2) 00:19:14.620 Could not set queue depth (nvme0n3) 00:19:14.620 Could not set queue depth (nvme0n4) 00:19:14.879 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:14.879 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:14.879 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:14.879 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:14.879 fio-3.35 00:19:14.879 Starting 4 threads 00:19:16.253 00:19:16.253 job0: (groupid=0, jobs=1): err= 0: pid=743001: Sun Jul 14 09:29:00 2024 00:19:16.253 read: IOPS=25, BW=102KiB/s (104kB/s)(104KiB/1024msec) 00:19:16.253 slat (nsec): min=17536, max=50012, avg=33021.19, stdev=7242.38 00:19:16.253 clat (usec): min=439, max=41282, avg=29163.57, stdev=18293.24 00:19:16.253 lat (usec): min=461, max=41316, avg=29196.59, stdev=18292.58 00:19:16.253 clat percentiles (usec): 00:19:16.253 | 1.00th=[ 441], 5.00th=[ 510], 10.00th=[ 553], 20.00th=[ 816], 00:19:16.253 | 30.00th=[15401], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:19:16.253 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 
95.00th=[41157], 00:19:16.253 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:16.253 | 99.99th=[41157] 00:19:16.253 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:19:16.253 slat (nsec): min=8372, max=67705, avg=19528.71, stdev=8410.96 00:19:16.253 clat (usec): min=226, max=822, avg=492.23, stdev=93.89 00:19:16.253 lat (usec): min=235, max=852, avg=511.76, stdev=94.61 00:19:16.253 clat percentiles (usec): 00:19:16.253 | 1.00th=[ 253], 5.00th=[ 338], 10.00th=[ 396], 20.00th=[ 433], 00:19:16.253 | 30.00th=[ 457], 40.00th=[ 474], 50.00th=[ 490], 60.00th=[ 502], 00:19:16.253 | 70.00th=[ 523], 80.00th=[ 553], 90.00th=[ 603], 95.00th=[ 676], 00:19:16.253 | 99.00th=[ 750], 99.50th=[ 807], 99.90th=[ 824], 99.95th=[ 824], 00:19:16.253 | 99.99th=[ 824] 00:19:16.253 bw ( KiB/s): min= 4096, max= 4096, per=34.13%, avg=4096.00, stdev= 0.00, samples=1 00:19:16.253 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:16.253 lat (usec) : 250=0.74%, 500=55.39%, 750=38.85%, 1000=1.30% 00:19:16.253 lat (msec) : 2=0.19%, 20=0.19%, 50=3.35% 00:19:16.253 cpu : usr=0.78%, sys=1.17%, ctx=539, majf=0, minf=2 00:19:16.253 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:16.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:16.253 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:16.253 issued rwts: total=26,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:16.253 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:16.253 job1: (groupid=0, jobs=1): err= 0: pid=743002: Sun Jul 14 09:29:00 2024 00:19:16.253 read: IOPS=740, BW=2962KiB/s (3033kB/s)(3024KiB/1021msec) 00:19:16.253 slat (nsec): min=6286, max=63148, avg=27274.86, stdev=10043.63 00:19:16.253 clat (usec): min=353, max=41033, avg=932.42, stdev=4146.82 00:19:16.253 lat (usec): min=368, max=41042, avg=959.69, stdev=4145.34 00:19:16.253 clat percentiles (usec): 00:19:16.253 | 1.00th=[ 363], 5.00th=[ 379], 10.00th=[ 420], 20.00th=[ 441], 00:19:16.253 | 30.00th=[ 474], 40.00th=[ 494], 50.00th=[ 506], 60.00th=[ 519], 00:19:16.253 | 70.00th=[ 529], 80.00th=[ 545], 90.00th=[ 578], 95.00th=[ 627], 00:19:16.253 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:16.253 | 99.99th=[41157] 00:19:16.253 write: IOPS=1002, BW=4012KiB/s (4108kB/s)(4096KiB/1021msec); 0 zone resets 00:19:16.253 slat (nsec): min=6810, max=56124, avg=15003.92, stdev=7086.15 00:19:16.253 clat (usec): min=214, max=626, avg=263.27, stdev=54.96 00:19:16.253 lat (usec): min=221, max=635, avg=278.27, stdev=55.73 00:19:16.253 clat percentiles (usec): 00:19:16.253 | 1.00th=[ 223], 5.00th=[ 229], 10.00th=[ 233], 20.00th=[ 237], 00:19:16.253 | 30.00th=[ 241], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 249], 00:19:16.253 | 70.00th=[ 255], 80.00th=[ 262], 90.00th=[ 306], 95.00th=[ 404], 00:19:16.253 | 99.00th=[ 490], 99.50th=[ 502], 99.90th=[ 562], 99.95th=[ 627], 00:19:16.253 | 99.99th=[ 627] 00:19:16.253 bw ( KiB/s): min= 2248, max= 5944, per=34.13%, avg=4096.00, stdev=2613.47, samples=2 00:19:16.254 iops : min= 562, max= 1486, avg=1024.00, stdev=653.37, samples=2 00:19:16.254 lat (usec) : 250=35.34%, 500=41.01%, 750=22.64%, 1000=0.39% 00:19:16.254 lat (msec) : 2=0.17%, 50=0.45% 00:19:16.254 cpu : usr=1.47%, sys=4.02%, ctx=1781, majf=0, minf=1 00:19:16.254 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:16.254 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:19:16.254 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:16.254 issued rwts: total=756,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:16.254 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:16.254 job2: (groupid=0, jobs=1): err= 0: pid=743003: Sun Jul 14 09:29:00 2024 00:19:16.254 read: IOPS=30, BW=123KiB/s (126kB/s)(124KiB/1010msec) 00:19:16.254 slat (nsec): min=7503, max=58272, avg=25351.90, stdev=11564.39 00:19:16.254 clat (usec): min=417, max=43011, avg=27599.48, stdev=19085.12 00:19:16.254 lat (usec): min=425, max=43030, avg=27624.83, stdev=19085.85 00:19:16.254 clat percentiles (usec): 00:19:16.254 | 1.00th=[ 416], 5.00th=[ 519], 10.00th=[ 537], 20.00th=[ 644], 00:19:16.254 | 30.00th=[ 988], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:19:16.254 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:19:16.254 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:19:16.254 | 99.99th=[43254] 00:19:16.254 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:19:16.254 slat (nsec): min=7692, max=38387, avg=11251.11, stdev=4734.60 00:19:16.254 clat (usec): min=220, max=653, avg=285.01, stdev=60.73 00:19:16.254 lat (usec): min=231, max=662, avg=296.26, stdev=61.58 00:19:16.254 clat percentiles (usec): 00:19:16.254 | 1.00th=[ 229], 5.00th=[ 235], 10.00th=[ 239], 20.00th=[ 245], 00:19:16.254 | 30.00th=[ 249], 40.00th=[ 255], 50.00th=[ 265], 60.00th=[ 277], 00:19:16.254 | 70.00th=[ 293], 80.00th=[ 318], 90.00th=[ 355], 95.00th=[ 400], 00:19:16.254 | 99.00th=[ 519], 99.50th=[ 570], 99.90th=[ 652], 99.95th=[ 652], 00:19:16.254 | 99.99th=[ 652] 00:19:16.254 bw ( KiB/s): min= 4096, max= 4096, per=34.13%, avg=4096.00, stdev= 0.00, samples=1 00:19:16.254 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:16.254 lat (usec) : 250=29.47%, 500=63.35%, 750=2.95%, 1000=0.37% 00:19:16.254 lat (msec) : 50=3.87% 00:19:16.254 cpu : usr=0.30%, sys=0.89%, ctx=544, majf=0, minf=1 00:19:16.254 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:16.254 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:16.254 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:16.254 issued rwts: total=31,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:16.254 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:16.254 job3: (groupid=0, jobs=1): err= 0: pid=743004: Sun Jul 14 09:29:00 2024 00:19:16.254 read: IOPS=579, BW=2316KiB/s (2372kB/s)(2372KiB/1024msec) 00:19:16.254 slat (nsec): min=6407, max=66159, avg=19039.33, stdev=6864.42 00:19:16.254 clat (usec): min=457, max=41128, avg=1085.10, stdev=4670.81 00:19:16.254 lat (usec): min=466, max=41154, avg=1104.13, stdev=4671.94 00:19:16.254 clat percentiles (usec): 00:19:16.254 | 1.00th=[ 469], 5.00th=[ 486], 10.00th=[ 494], 20.00th=[ 506], 00:19:16.254 | 30.00th=[ 515], 40.00th=[ 529], 50.00th=[ 529], 60.00th=[ 545], 00:19:16.254 | 70.00th=[ 553], 80.00th=[ 562], 90.00th=[ 578], 95.00th=[ 603], 00:19:16.254 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:16.254 | 99.99th=[41157] 00:19:16.254 write: IOPS=1000, BW=4000KiB/s (4096kB/s)(4096KiB/1024msec); 0 zone resets 00:19:16.254 slat (nsec): min=7793, max=65433, avg=22276.08, stdev=11932.25 00:19:16.254 clat (usec): min=224, max=761, avg=329.19, stdev=69.97 00:19:16.254 lat (usec): min=233, max=771, avg=351.47, stdev=72.77 00:19:16.254 clat percentiles (usec): 00:19:16.254 | 
1.00th=[ 233], 5.00th=[ 241], 10.00th=[ 249], 20.00th=[ 269], 00:19:16.254 | 30.00th=[ 285], 40.00th=[ 293], 50.00th=[ 314], 60.00th=[ 334], 00:19:16.254 | 70.00th=[ 371], 80.00th=[ 396], 90.00th=[ 416], 95.00th=[ 445], 00:19:16.254 | 99.00th=[ 515], 99.50th=[ 562], 99.90th=[ 701], 99.95th=[ 758], 00:19:16.254 | 99.99th=[ 758] 00:19:16.254 bw ( KiB/s): min= 3200, max= 4992, per=34.13%, avg=4096.00, stdev=1267.14, samples=2 00:19:16.254 iops : min= 800, max= 1248, avg=1024.00, stdev=316.78, samples=2 00:19:16.254 lat (usec) : 250=6.93%, 500=61.16%, 750=31.23%, 1000=0.06% 00:19:16.254 lat (msec) : 2=0.06%, 4=0.06%, 50=0.49% 00:19:16.254 cpu : usr=2.54%, sys=4.11%, ctx=1618, majf=0, minf=1 00:19:16.254 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:16.254 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:16.254 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:16.254 issued rwts: total=593,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:16.254 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:16.254 00:19:16.254 Run status group 0 (all jobs): 00:19:16.254 READ: bw=5492KiB/s (5624kB/s), 102KiB/s-2962KiB/s (104kB/s-3033kB/s), io=5624KiB (5759kB), run=1010-1024msec 00:19:16.254 WRITE: bw=11.7MiB/s (12.3MB/s), 2000KiB/s-4012KiB/s (2048kB/s-4108kB/s), io=12.0MiB (12.6MB), run=1010-1024msec 00:19:16.254 00:19:16.254 Disk stats (read/write): 00:19:16.254 nvme0n1: ios=64/512, merge=0/0, ticks=813/235, in_queue=1048, util=98.50% 00:19:16.254 nvme0n2: ios=775/1024, merge=0/0, ticks=1337/262, in_queue=1599, util=90.64% 00:19:16.254 nvme0n3: ios=50/512, merge=0/0, ticks=1594/137, in_queue=1731, util=93.30% 00:19:16.254 nvme0n4: ios=645/1024, merge=0/0, ticks=819/330, in_queue=1149, util=97.57% 00:19:16.254 09:29:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:19:16.254 [global] 00:19:16.254 thread=1 00:19:16.254 invalidate=1 00:19:16.254 rw=write 00:19:16.254 time_based=1 00:19:16.254 runtime=1 00:19:16.254 ioengine=libaio 00:19:16.254 direct=1 00:19:16.254 bs=4096 00:19:16.254 iodepth=128 00:19:16.254 norandommap=0 00:19:16.254 numjobs=1 00:19:16.254 00:19:16.254 verify_dump=1 00:19:16.254 verify_backlog=512 00:19:16.254 verify_state_save=0 00:19:16.254 do_verify=1 00:19:16.254 verify=crc32c-intel 00:19:16.254 [job0] 00:19:16.254 filename=/dev/nvme0n1 00:19:16.254 [job1] 00:19:16.254 filename=/dev/nvme0n2 00:19:16.254 [job2] 00:19:16.254 filename=/dev/nvme0n3 00:19:16.254 [job3] 00:19:16.254 filename=/dev/nvme0n4 00:19:16.254 Could not set queue depth (nvme0n1) 00:19:16.254 Could not set queue depth (nvme0n2) 00:19:16.254 Could not set queue depth (nvme0n3) 00:19:16.254 Could not set queue depth (nvme0n4) 00:19:16.254 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:16.254 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:16.254 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:16.254 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:16.254 fio-3.35 00:19:16.254 Starting 4 threads 00:19:17.630 00:19:17.630 job0: (groupid=0, jobs=1): err= 0: pid=743332: Sun Jul 14 09:29:01 2024 00:19:17.630 read: IOPS=3445, BW=13.5MiB/s 
(14.1MB/s)(13.6MiB/1009msec) 00:19:17.630 slat (usec): min=3, max=16909, avg=131.44, stdev=883.10 00:19:17.630 clat (usec): min=2766, max=51566, avg=18388.45, stdev=7608.44 00:19:17.630 lat (usec): min=6746, max=52123, avg=18519.89, stdev=7656.05 00:19:17.630 clat percentiles (usec): 00:19:17.630 | 1.00th=[ 7439], 5.00th=[10814], 10.00th=[11207], 20.00th=[12911], 00:19:17.630 | 30.00th=[13698], 40.00th=[14746], 50.00th=[16450], 60.00th=[17433], 00:19:17.630 | 70.00th=[20317], 80.00th=[24511], 90.00th=[29754], 95.00th=[32900], 00:19:17.630 | 99.00th=[47449], 99.50th=[51119], 99.90th=[51643], 99.95th=[51643], 00:19:17.630 | 99.99th=[51643] 00:19:17.630 write: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec); 0 zone resets 00:19:17.630 slat (usec): min=3, max=25880, avg=142.21, stdev=1009.83 00:19:17.630 clat (usec): min=2176, max=91756, avg=17866.71, stdev=12974.40 00:19:17.630 lat (usec): min=2200, max=91764, avg=18008.92, stdev=13033.33 00:19:17.630 clat percentiles (usec): 00:19:17.630 | 1.00th=[ 4359], 5.00th=[ 7570], 10.00th=[ 8455], 20.00th=[10683], 00:19:17.630 | 30.00th=[11863], 40.00th=[13304], 50.00th=[14484], 60.00th=[15795], 00:19:17.630 | 70.00th=[17433], 80.00th=[21890], 90.00th=[28443], 95.00th=[39060], 00:19:17.630 | 99.00th=[86508], 99.50th=[88605], 99.90th=[91751], 99.95th=[91751], 00:19:17.630 | 99.99th=[91751] 00:19:17.630 bw ( KiB/s): min=12320, max=16352, per=24.88%, avg=14336.00, stdev=2851.05, samples=2 00:19:17.630 iops : min= 3080, max= 4088, avg=3584.00, stdev=712.76, samples=2 00:19:17.630 lat (msec) : 4=0.14%, 10=10.51%, 20=60.78%, 50=26.50%, 100=2.07% 00:19:17.630 cpu : usr=5.46%, sys=6.94%, ctx=267, majf=0, minf=13 00:19:17.630 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:19:17.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.630 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:17.630 issued rwts: total=3477,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:17.630 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:17.630 job1: (groupid=0, jobs=1): err= 0: pid=743353: Sun Jul 14 09:29:01 2024 00:19:17.630 read: IOPS=2019, BW=8079KiB/s (8273kB/s)(8192KiB/1014msec) 00:19:17.630 slat (usec): min=4, max=15492, avg=175.43, stdev=1042.86 00:19:17.630 clat (usec): min=7228, max=70473, avg=19160.17, stdev=11073.62 00:19:17.630 lat (usec): min=7243, max=70490, avg=19335.60, stdev=11188.03 00:19:17.630 clat percentiles (usec): 00:19:17.630 | 1.00th=[10028], 5.00th=[12387], 10.00th=[12780], 20.00th=[13304], 00:19:17.630 | 30.00th=[13566], 40.00th=[13960], 50.00th=[15401], 60.00th=[16909], 00:19:17.630 | 70.00th=[17695], 80.00th=[20317], 90.00th=[32375], 95.00th=[45876], 00:19:17.630 | 99.00th=[63177], 99.50th=[69731], 99.90th=[70779], 99.95th=[70779], 00:19:17.630 | 99.99th=[70779] 00:19:17.630 write: IOPS=2325, BW=9302KiB/s (9525kB/s)(9432KiB/1014msec); 0 zone resets 00:19:17.630 slat (usec): min=5, max=41075, avg=258.39, stdev=1397.20 00:19:17.630 clat (usec): min=4517, max=87929, avg=34344.14, stdev=17575.81 00:19:17.630 lat (usec): min=4543, max=87937, avg=34602.53, stdev=17713.26 00:19:17.630 clat percentiles (usec): 00:19:17.630 | 1.00th=[ 6063], 5.00th=[ 9503], 10.00th=[12518], 20.00th=[18220], 00:19:17.630 | 30.00th=[21890], 40.00th=[25035], 50.00th=[30540], 60.00th=[36963], 00:19:17.630 | 70.00th=[45876], 80.00th=[51643], 90.00th=[60031], 95.00th=[65274], 00:19:17.630 | 99.00th=[72877], 99.50th=[79168], 99.90th=[87557], 99.95th=[87557], 00:19:17.630 | 
99.99th=[87557] 00:19:17.630 bw ( KiB/s): min= 7368, max=10480, per=15.49%, avg=8924.00, stdev=2200.52, samples=2 00:19:17.630 iops : min= 1842, max= 2620, avg=2231.00, stdev=550.13, samples=2 00:19:17.630 lat (msec) : 10=3.84%, 20=46.01%, 50=35.59%, 100=14.57% 00:19:17.630 cpu : usr=4.54%, sys=5.73%, ctx=274, majf=0, minf=15 00:19:17.630 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:19:17.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.630 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:17.630 issued rwts: total=2048,2358,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:17.630 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:17.630 job2: (groupid=0, jobs=1): err= 0: pid=743354: Sun Jul 14 09:29:01 2024 00:19:17.630 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:19:17.630 slat (usec): min=3, max=58866, avg=125.36, stdev=1104.48 00:19:17.630 clat (usec): min=8113, max=75792, avg=15300.09, stdev=6367.27 00:19:17.630 lat (usec): min=8120, max=75810, avg=15425.46, stdev=6443.15 00:19:17.630 clat percentiles (usec): 00:19:17.630 | 1.00th=[10421], 5.00th=[11076], 10.00th=[11207], 20.00th=[11731], 00:19:17.630 | 30.00th=[13566], 40.00th=[14353], 50.00th=[15008], 60.00th=[15401], 00:19:17.630 | 70.00th=[16057], 80.00th=[17171], 90.00th=[17957], 95.00th=[19006], 00:19:17.630 | 99.00th=[71828], 99.50th=[71828], 99.90th=[73925], 99.95th=[73925], 00:19:17.630 | 99.99th=[76022] 00:19:17.630 write: IOPS=4392, BW=17.2MiB/s (18.0MB/s)(17.2MiB/1004msec); 0 zone resets 00:19:17.630 slat (usec): min=4, max=9551, avg=99.31, stdev=458.52 00:19:17.630 clat (usec): min=445, max=74382, avg=14480.39, stdev=8151.03 00:19:17.630 lat (usec): min=5218, max=74410, avg=14579.71, stdev=8152.81 00:19:17.630 clat percentiles (usec): 00:19:17.630 | 1.00th=[ 5538], 5.00th=[ 9634], 10.00th=[10552], 20.00th=[11207], 00:19:17.630 | 30.00th=[12125], 40.00th=[13173], 50.00th=[13698], 60.00th=[14353], 00:19:17.630 | 70.00th=[14877], 80.00th=[15401], 90.00th=[16319], 95.00th=[17433], 00:19:17.630 | 99.00th=[69731], 99.50th=[70779], 99.90th=[72877], 99.95th=[72877], 00:19:17.630 | 99.99th=[73925] 00:19:17.630 bw ( KiB/s): min=16384, max=17872, per=29.72%, avg=17128.00, stdev=1052.17, samples=2 00:19:17.630 iops : min= 4096, max= 4468, avg=4282.00, stdev=263.04, samples=2 00:19:17.630 lat (usec) : 500=0.01% 00:19:17.630 lat (msec) : 10=3.10%, 20=94.74%, 50=0.65%, 100=1.49% 00:19:17.630 cpu : usr=6.48%, sys=9.77%, ctx=415, majf=0, minf=11 00:19:17.630 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:19:17.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.630 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:17.630 issued rwts: total=4096,4410,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:17.630 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:17.630 job3: (groupid=0, jobs=1): err= 0: pid=743355: Sun Jul 14 09:29:01 2024 00:19:17.630 read: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec) 00:19:17.630 slat (usec): min=2, max=24303, avg=115.33, stdev=786.92 00:19:17.630 clat (usec): min=4081, max=46842, avg=14462.21, stdev=7127.89 00:19:17.630 lat (usec): min=4091, max=46853, avg=14577.53, stdev=7174.52 00:19:17.630 clat percentiles (usec): 00:19:17.630 | 1.00th=[ 4293], 5.00th=[ 6587], 10.00th=[ 7963], 20.00th=[10290], 00:19:17.630 | 30.00th=[11731], 40.00th=[12387], 50.00th=[12649], 60.00th=[12911], 
00:19:17.630 | 70.00th=[13566], 80.00th=[17695], 90.00th=[25560], 95.00th=[28967], 00:19:17.630 | 99.00th=[42730], 99.50th=[46924], 99.90th=[46924], 99.95th=[46924], 00:19:17.630 | 99.99th=[46924] 00:19:17.630 write: IOPS=4226, BW=16.5MiB/s (17.3MB/s)(16.6MiB/1007msec); 0 zone resets 00:19:17.630 slat (usec): min=4, max=16054, avg=115.52, stdev=715.77 00:19:17.630 clat (usec): min=333, max=75616, avg=15957.87, stdev=9205.58 00:19:17.630 lat (usec): min=4400, max=75626, avg=16073.39, stdev=9234.19 00:19:17.630 clat percentiles (usec): 00:19:17.630 | 1.00th=[ 4555], 5.00th=[ 8455], 10.00th=[ 9372], 20.00th=[11076], 00:19:17.630 | 30.00th=[11731], 40.00th=[12649], 50.00th=[13566], 60.00th=[14353], 00:19:17.631 | 70.00th=[15533], 80.00th=[17695], 90.00th=[26084], 95.00th=[33817], 00:19:17.631 | 99.00th=[66323], 99.50th=[70779], 99.90th=[76022], 99.95th=[76022], 00:19:17.631 | 99.99th=[76022] 00:19:17.631 bw ( KiB/s): min=14856, max=18168, per=28.65%, avg=16512.00, stdev=2341.94, samples=2 00:19:17.631 iops : min= 3714, max= 4542, avg=4128.00, stdev=585.48, samples=2 00:19:17.631 lat (usec) : 500=0.01% 00:19:17.631 lat (msec) : 10=15.98%, 20=66.73%, 50=16.53%, 100=0.74% 00:19:17.631 cpu : usr=4.27%, sys=8.75%, ctx=385, majf=0, minf=11 00:19:17.631 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:17.631 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.631 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:17.631 issued rwts: total=4096,4256,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:17.631 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:17.631 00:19:17.631 Run status group 0 (all jobs): 00:19:17.631 READ: bw=52.8MiB/s (55.4MB/s), 8079KiB/s-15.9MiB/s (8273kB/s-16.7MB/s), io=53.6MiB (56.2MB), run=1004-1014msec 00:19:17.631 WRITE: bw=56.3MiB/s (59.0MB/s), 9302KiB/s-17.2MiB/s (9525kB/s-18.0MB/s), io=57.1MiB (59.8MB), run=1004-1014msec 00:19:17.631 00:19:17.631 Disk stats (read/write): 00:19:17.631 nvme0n1: ios=2825/3072, merge=0/0, ticks=28913/28230, in_queue=57143, util=84.27% 00:19:17.631 nvme0n2: ios=1559/2007, merge=0/0, ticks=28631/62547, in_queue=91178, util=91.26% 00:19:17.631 nvme0n3: ios=3094/3302, merge=0/0, ticks=19945/15096, in_queue=35041, util=93.30% 00:19:17.631 nvme0n4: ios=3641/3696, merge=0/0, ticks=20070/22712, in_queue=42782, util=93.96% 00:19:17.631 09:29:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:19:17.631 [global] 00:19:17.631 thread=1 00:19:17.631 invalidate=1 00:19:17.631 rw=randwrite 00:19:17.631 time_based=1 00:19:17.631 runtime=1 00:19:17.631 ioengine=libaio 00:19:17.631 direct=1 00:19:17.631 bs=4096 00:19:17.631 iodepth=128 00:19:17.631 norandommap=0 00:19:17.631 numjobs=1 00:19:17.631 00:19:17.631 verify_dump=1 00:19:17.631 verify_backlog=512 00:19:17.631 verify_state_save=0 00:19:17.631 do_verify=1 00:19:17.631 verify=crc32c-intel 00:19:17.631 [job0] 00:19:17.631 filename=/dev/nvme0n1 00:19:17.631 [job1] 00:19:17.631 filename=/dev/nvme0n2 00:19:17.631 [job2] 00:19:17.631 filename=/dev/nvme0n3 00:19:17.631 [job3] 00:19:17.631 filename=/dev/nvme0n4 00:19:17.631 Could not set queue depth (nvme0n1) 00:19:17.631 Could not set queue depth (nvme0n2) 00:19:17.631 Could not set queue depth (nvme0n3) 00:19:17.631 Could not set queue depth (nvme0n4) 00:19:17.890 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:19:17.890 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:17.890 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:17.890 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:17.890 fio-3.35 00:19:17.890 Starting 4 threads 00:19:19.293 00:19:19.293 job0: (groupid=0, jobs=1): err= 0: pid=743577: Sun Jul 14 09:29:03 2024 00:19:19.293 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:19:19.293 slat (usec): min=2, max=10560, avg=106.56, stdev=516.57 00:19:19.293 clat (usec): min=6507, max=22965, avg=14194.44, stdev=3185.98 00:19:19.293 lat (usec): min=6512, max=23000, avg=14301.00, stdev=3190.38 00:19:19.293 clat percentiles (usec): 00:19:19.293 | 1.00th=[ 7701], 5.00th=[ 9503], 10.00th=[10683], 20.00th=[12125], 00:19:19.293 | 30.00th=[12518], 40.00th=[13042], 50.00th=[13566], 60.00th=[14091], 00:19:19.293 | 70.00th=[15008], 80.00th=[16909], 90.00th=[19006], 95.00th=[20579], 00:19:19.293 | 99.00th=[22414], 99.50th=[22676], 99.90th=[22938], 99.95th=[22938], 00:19:19.293 | 99.99th=[22938] 00:19:19.293 write: IOPS=4019, BW=15.7MiB/s (16.5MB/s)(15.8MiB/1003msec); 0 zone resets 00:19:19.293 slat (usec): min=3, max=17918, avg=144.89, stdev=714.99 00:19:19.293 clat (usec): min=669, max=41066, avg=18790.67, stdev=4586.03 00:19:19.293 lat (usec): min=4601, max=41074, avg=18935.56, stdev=4585.74 00:19:19.293 clat percentiles (usec): 00:19:19.293 | 1.00th=[ 7439], 5.00th=[10683], 10.00th=[13566], 20.00th=[16188], 00:19:19.293 | 30.00th=[17171], 40.00th=[19006], 50.00th=[19530], 60.00th=[20055], 00:19:19.293 | 70.00th=[20317], 80.00th=[20841], 90.00th=[21365], 95.00th=[23200], 00:19:19.293 | 99.00th=[35390], 99.50th=[35914], 99.90th=[41157], 99.95th=[41157], 00:19:19.293 | 99.99th=[41157] 00:19:19.293 bw ( KiB/s): min=14848, max=16384, per=30.92%, avg=15616.00, stdev=1086.12, samples=2 00:19:19.293 iops : min= 3712, max= 4096, avg=3904.00, stdev=271.53, samples=2 00:19:19.293 lat (usec) : 750=0.01% 00:19:19.293 lat (msec) : 10=5.62%, 20=71.63%, 50=22.74% 00:19:19.293 cpu : usr=3.89%, sys=7.49%, ctx=449, majf=0, minf=17 00:19:19.293 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:19.293 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.293 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:19.293 issued rwts: total=3584,4032,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.293 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:19.293 job1: (groupid=0, jobs=1): err= 0: pid=743578: Sun Jul 14 09:29:03 2024 00:19:19.293 read: IOPS=3089, BW=12.1MiB/s (12.7MB/s)(12.3MiB/1017msec) 00:19:19.293 slat (usec): min=3, max=11765, avg=123.39, stdev=796.23 00:19:19.293 clat (usec): min=7185, max=44900, avg=14814.43, stdev=6181.61 00:19:19.293 lat (usec): min=7603, max=44907, avg=14937.81, stdev=6242.23 00:19:19.293 clat percentiles (usec): 00:19:19.293 | 1.00th=[ 9110], 5.00th=[ 9241], 10.00th=[ 9896], 20.00th=[10814], 00:19:19.293 | 30.00th=[11863], 40.00th=[12518], 50.00th=[12780], 60.00th=[13435], 00:19:19.293 | 70.00th=[14091], 80.00th=[17433], 90.00th=[24511], 95.00th=[28181], 00:19:19.293 | 99.00th=[40109], 99.50th=[41681], 99.90th=[44827], 99.95th=[44827], 00:19:19.293 | 99.99th=[44827] 00:19:19.293 write: IOPS=3524, BW=13.8MiB/s (14.4MB/s)(14.0MiB/1017msec); 0 zone resets 
00:19:19.293 slat (usec): min=3, max=16418, avg=162.81, stdev=846.77 00:19:19.293 clat (usec): min=4738, max=57230, avg=22710.20, stdev=11968.77 00:19:19.293 lat (usec): min=5222, max=57251, avg=22873.02, stdev=12051.88 00:19:19.293 clat percentiles (usec): 00:19:19.293 | 1.00th=[ 6718], 5.00th=[ 8455], 10.00th=[ 9634], 20.00th=[11863], 00:19:19.293 | 30.00th=[13042], 40.00th=[16450], 50.00th=[20055], 60.00th=[23987], 00:19:19.293 | 70.00th=[28705], 80.00th=[33817], 90.00th=[41157], 95.00th=[44827], 00:19:19.293 | 99.00th=[51119], 99.50th=[53740], 99.90th=[56886], 99.95th=[57410], 00:19:19.293 | 99.99th=[57410] 00:19:19.294 bw ( KiB/s): min=13232, max=14976, per=27.93%, avg=14104.00, stdev=1233.19, samples=2 00:19:19.294 iops : min= 3308, max= 3744, avg=3526.00, stdev=308.30, samples=2 00:19:19.294 lat (msec) : 10=10.78%, 20=56.94%, 50=31.42%, 100=0.86% 00:19:19.294 cpu : usr=3.54%, sys=5.71%, ctx=332, majf=0, minf=15 00:19:19.294 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:19:19.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.294 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:19.294 issued rwts: total=3142,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.294 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:19.294 job2: (groupid=0, jobs=1): err= 0: pid=743579: Sun Jul 14 09:29:03 2024 00:19:19.294 read: IOPS=2519, BW=9.84MiB/s (10.3MB/s)(10.0MiB/1016msec) 00:19:19.294 slat (usec): min=2, max=18148, avg=161.93, stdev=966.21 00:19:19.294 clat (usec): min=11480, max=61160, avg=19184.65, stdev=7039.82 00:19:19.294 lat (usec): min=11489, max=61175, avg=19346.58, stdev=7137.83 00:19:19.294 clat percentiles (usec): 00:19:19.294 | 1.00th=[11731], 5.00th=[13829], 10.00th=[14615], 20.00th=[15270], 00:19:19.294 | 30.00th=[15401], 40.00th=[15664], 50.00th=[16188], 60.00th=[17433], 00:19:19.294 | 70.00th=[18220], 80.00th=[22938], 90.00th=[28967], 95.00th=[34866], 00:19:19.294 | 99.00th=[42206], 99.50th=[55313], 99.90th=[61080], 99.95th=[61080], 00:19:19.294 | 99.99th=[61080] 00:19:19.294 write: IOPS=2621, BW=10.2MiB/s (10.7MB/s)(10.4MiB/1016msec); 0 zone resets 00:19:19.294 slat (usec): min=3, max=15006, avg=208.42, stdev=1013.28 00:19:19.294 clat (usec): min=2808, max=96078, avg=29912.47, stdev=20980.88 00:19:19.294 lat (usec): min=2821, max=96086, avg=30120.88, stdev=21124.91 00:19:19.294 clat percentiles (usec): 00:19:19.294 | 1.00th=[ 9372], 5.00th=[12649], 10.00th=[12911], 20.00th=[13829], 00:19:19.294 | 30.00th=[17433], 40.00th=[21103], 50.00th=[23987], 60.00th=[25822], 00:19:19.294 | 70.00th=[28705], 80.00th=[33817], 90.00th=[65799], 95.00th=[85459], 00:19:19.294 | 99.00th=[91751], 99.50th=[94897], 99.90th=[95945], 99.95th=[95945], 00:19:19.294 | 99.99th=[95945] 00:19:19.294 bw ( KiB/s): min= 8192, max=12288, per=20.28%, avg=10240.00, stdev=2896.31, samples=2 00:19:19.294 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:19:19.294 lat (msec) : 4=0.31%, 10=0.54%, 20=54.36%, 50=36.42%, 100=8.39% 00:19:19.294 cpu : usr=3.15%, sys=4.93%, ctx=270, majf=0, minf=11 00:19:19.294 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:19:19.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.294 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:19.294 issued rwts: total=2560,2663,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.294 latency : target=0, window=0, percentile=100.00%, depth=128 
00:19:19.294 job3: (groupid=0, jobs=1): err= 0: pid=743580: Sun Jul 14 09:29:03 2024 00:19:19.294 read: IOPS=2240, BW=8960KiB/s (9175kB/s)(9032KiB/1008msec) 00:19:19.294 slat (usec): min=3, max=15478, avg=181.05, stdev=1058.61 00:19:19.294 clat (usec): min=6868, max=71492, avg=22436.43, stdev=12270.47 00:19:19.294 lat (usec): min=6882, max=80375, avg=22617.49, stdev=12397.91 00:19:19.294 clat percentiles (usec): 00:19:19.294 | 1.00th=[ 7635], 5.00th=[10290], 10.00th=[11207], 20.00th=[11994], 00:19:19.294 | 30.00th=[12649], 40.00th=[14484], 50.00th=[17695], 60.00th=[22938], 00:19:19.294 | 70.00th=[28967], 80.00th=[33162], 90.00th=[35914], 95.00th=[45351], 00:19:19.294 | 99.00th=[65274], 99.50th=[65799], 99.90th=[71828], 99.95th=[71828], 00:19:19.294 | 99.99th=[71828] 00:19:19.294 write: IOPS=2539, BW=9.92MiB/s (10.4MB/s)(10.0MiB/1008msec); 0 zone resets 00:19:19.294 slat (usec): min=4, max=9219, avg=221.53, stdev=846.84 00:19:19.294 clat (usec): min=4960, max=76169, avg=30010.90, stdev=17223.64 00:19:19.294 lat (usec): min=4968, max=76194, avg=30232.42, stdev=17328.10 00:19:19.294 clat percentiles (usec): 00:19:19.294 | 1.00th=[ 8356], 5.00th=[10290], 10.00th=[11600], 20.00th=[13304], 00:19:19.294 | 30.00th=[14877], 40.00th=[18482], 50.00th=[28443], 60.00th=[32375], 00:19:19.294 | 70.00th=[39060], 80.00th=[44827], 90.00th=[55313], 95.00th=[64226], 00:19:19.294 | 99.00th=[72877], 99.50th=[73925], 99.90th=[76022], 99.95th=[76022], 00:19:19.294 | 99.99th=[76022] 00:19:19.294 bw ( KiB/s): min= 8072, max=12408, per=20.28%, avg=10240.00, stdev=3066.02, samples=2 00:19:19.294 iops : min= 2018, max= 3102, avg=2560.00, stdev=766.50, samples=2 00:19:19.294 lat (msec) : 10=3.82%, 20=42.24%, 50=44.33%, 100=9.61% 00:19:19.294 cpu : usr=3.48%, sys=4.57%, ctx=359, majf=0, minf=7 00:19:19.294 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:19:19.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.294 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:19.294 issued rwts: total=2258,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.294 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:19.294 00:19:19.294 Run status group 0 (all jobs): 00:19:19.294 READ: bw=44.3MiB/s (46.5MB/s), 8960KiB/s-14.0MiB/s (9175kB/s-14.6MB/s), io=45.1MiB (47.3MB), run=1003-1017msec 00:19:19.294 WRITE: bw=49.3MiB/s (51.7MB/s), 9.92MiB/s-15.7MiB/s (10.4MB/s-16.5MB/s), io=50.2MiB (52.6MB), run=1003-1017msec 00:19:19.294 00:19:19.294 Disk stats (read/write): 00:19:19.294 nvme0n1: ios=3122/3501, merge=0/0, ticks=13011/18521, in_queue=31532, util=89.98% 00:19:19.294 nvme0n2: ios=3118/3095, merge=0/0, ticks=38372/51108, in_queue=89480, util=98.27% 00:19:19.294 nvme0n3: ios=1944/2048, merge=0/0, ticks=19811/31954, in_queue=51765, util=91.77% 00:19:19.294 nvme0n4: ios=2105/2191, merge=0/0, ticks=22688/29464, in_queue=52152, util=96.74% 00:19:19.294 09:29:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:19:19.294 09:29:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=743716 00:19:19.294 09:29:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:19:19.294 09:29:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:19:19.294 [global] 00:19:19.294 thread=1 00:19:19.294 invalidate=1 00:19:19.294 rw=read 00:19:19.294 time_based=1 00:19:19.294 runtime=10 00:19:19.294 ioengine=libaio 00:19:19.294 direct=1 
00:19:19.294 bs=4096 00:19:19.294 iodepth=1 00:19:19.294 norandommap=1 00:19:19.294 numjobs=1 00:19:19.294 00:19:19.294 [job0] 00:19:19.294 filename=/dev/nvme0n1 00:19:19.294 [job1] 00:19:19.294 filename=/dev/nvme0n2 00:19:19.294 [job2] 00:19:19.294 filename=/dev/nvme0n3 00:19:19.294 [job3] 00:19:19.294 filename=/dev/nvme0n4 00:19:19.294 Could not set queue depth (nvme0n1) 00:19:19.294 Could not set queue depth (nvme0n2) 00:19:19.294 Could not set queue depth (nvme0n3) 00:19:19.294 Could not set queue depth (nvme0n4) 00:19:19.294 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:19.294 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:19.294 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:19.294 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:19.294 fio-3.35 00:19:19.294 Starting 4 threads 00:19:22.574 09:29:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:19:22.574 09:29:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:19:22.574 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=24543232, buflen=4096 00:19:22.574 fio: pid=743817, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:22.574 09:29:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:22.574 09:29:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:22.574 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=2289664, buflen=4096 00:19:22.574 fio: pid=743816, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:22.831 09:29:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:22.831 09:29:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:19:22.831 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=929792, buflen=4096 00:19:22.831 fio: pid=743812, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:23.110 09:29:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:23.110 09:29:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:19:23.110 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=19009536, buflen=4096 00:19:23.110 fio: pid=743813, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:23.110 00:19:23.110 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=743812: Sun Jul 14 09:29:07 2024 00:19:23.110 read: IOPS=66, BW=265KiB/s (272kB/s)(908KiB/3424msec) 00:19:23.110 slat (usec): min=7, max=15835, avg=139.89, stdev=1345.14 00:19:23.110 clat (usec): min=342, max=46417, avg=14838.16, stdev=19387.86 00:19:23.110 lat (usec): min=350, max=56969, avg=14978.52, stdev=19607.22 00:19:23.110 clat 
percentiles (usec): 00:19:23.110 | 1.00th=[ 441], 5.00th=[ 474], 10.00th=[ 482], 20.00th=[ 502], 00:19:23.110 | 30.00th=[ 515], 40.00th=[ 545], 50.00th=[ 644], 60.00th=[ 816], 00:19:23.110 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:23.110 | 99.00th=[41157], 99.50th=[41681], 99.90th=[46400], 99.95th=[46400], 00:19:23.110 | 99.99th=[46400] 00:19:23.110 bw ( KiB/s): min= 96, max= 1224, per=2.34%, avg=289.33, stdev=457.99, samples=6 00:19:23.110 iops : min= 24, max= 306, avg=72.33, stdev=114.50, samples=6 00:19:23.110 lat (usec) : 500=19.74%, 750=35.96%, 1000=8.77% 00:19:23.110 lat (msec) : 50=35.09% 00:19:23.110 cpu : usr=0.00%, sys=0.18%, ctx=230, majf=0, minf=1 00:19:23.110 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:23.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.110 complete : 0=0.4%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.110 issued rwts: total=228,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.110 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:23.110 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=743813: Sun Jul 14 09:29:07 2024 00:19:23.110 read: IOPS=1257, BW=5030KiB/s (5150kB/s)(18.1MiB/3691msec) 00:19:23.110 slat (usec): min=5, max=9832, avg=16.57, stdev=144.31 00:19:23.110 clat (usec): min=370, max=44163, avg=769.90, stdev=2962.66 00:19:23.110 lat (usec): min=384, max=51986, avg=786.47, stdev=2996.50 00:19:23.110 clat percentiles (usec): 00:19:23.110 | 1.00th=[ 408], 5.00th=[ 420], 10.00th=[ 433], 20.00th=[ 482], 00:19:23.110 | 30.00th=[ 494], 40.00th=[ 506], 50.00th=[ 515], 60.00th=[ 537], 00:19:23.110 | 70.00th=[ 586], 80.00th=[ 635], 90.00th=[ 734], 95.00th=[ 824], 00:19:23.110 | 99.00th=[ 938], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:19:23.110 | 99.99th=[44303] 00:19:23.110 bw ( KiB/s): min= 93, max= 7192, per=42.82%, avg=5299.00, stdev=2674.38, samples=7 00:19:23.110 iops : min= 23, max= 1798, avg=1324.71, stdev=668.68, samples=7 00:19:23.110 lat (usec) : 500=34.98%, 750=55.79%, 1000=8.57% 00:19:23.110 lat (msec) : 2=0.09%, 10=0.02%, 50=0.52% 00:19:23.110 cpu : usr=1.11%, sys=2.85%, ctx=4646, majf=0, minf=1 00:19:23.110 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:23.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.110 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.110 issued rwts: total=4642,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.110 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:23.110 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=743816: Sun Jul 14 09:29:07 2024 00:19:23.110 read: IOPS=176, BW=707KiB/s (724kB/s)(2236KiB/3164msec) 00:19:23.110 slat (nsec): min=6194, max=70750, avg=23633.47, stdev=11369.57 00:19:23.110 clat (usec): min=422, max=41571, avg=5581.30, stdev=13410.66 00:19:23.110 lat (usec): min=456, max=41589, avg=5604.92, stdev=13410.07 00:19:23.110 clat percentiles (usec): 00:19:23.110 | 1.00th=[ 441], 5.00th=[ 457], 10.00th=[ 474], 20.00th=[ 486], 00:19:23.110 | 30.00th=[ 494], 40.00th=[ 506], 50.00th=[ 515], 60.00th=[ 523], 00:19:23.110 | 70.00th=[ 537], 80.00th=[ 553], 90.00th=[41157], 95.00th=[41157], 00:19:23.110 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:19:23.110 | 99.99th=[41681] 00:19:23.110 bw ( KiB/s): min= 96, max= 3952, 
per=5.98%, avg=740.00, stdev=1573.56, samples=6 00:19:23.110 iops : min= 24, max= 988, avg=185.00, stdev=393.39, samples=6 00:19:23.110 lat (usec) : 500=35.71%, 750=51.07%, 1000=0.36% 00:19:23.110 lat (msec) : 2=0.18%, 50=12.50% 00:19:23.110 cpu : usr=0.22%, sys=0.44%, ctx=562, majf=0, minf=1 00:19:23.110 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:23.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.110 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.110 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.110 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:23.110 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=743817: Sun Jul 14 09:29:07 2024 00:19:23.110 read: IOPS=2053, BW=8214KiB/s (8411kB/s)(23.4MiB/2918msec) 00:19:23.110 slat (nsec): min=5683, max=59677, avg=11305.63, stdev=5582.66 00:19:23.110 clat (usec): min=335, max=1562, avg=468.27, stdev=69.08 00:19:23.110 lat (usec): min=344, max=1594, avg=479.57, stdev=69.53 00:19:23.110 clat percentiles (usec): 00:19:23.110 | 1.00th=[ 347], 5.00th=[ 355], 10.00th=[ 363], 20.00th=[ 383], 00:19:23.110 | 30.00th=[ 457], 40.00th=[ 478], 50.00th=[ 490], 60.00th=[ 498], 00:19:23.110 | 70.00th=[ 506], 80.00th=[ 519], 90.00th=[ 537], 95.00th=[ 553], 00:19:23.110 | 99.00th=[ 603], 99.50th=[ 627], 99.90th=[ 709], 99.95th=[ 873], 00:19:23.110 | 99.99th=[ 1565] 00:19:23.110 bw ( KiB/s): min= 7664, max= 9016, per=64.74%, avg=8011.20, stdev=575.74, samples=5 00:19:23.110 iops : min= 1916, max= 2254, avg=2002.80, stdev=143.93, samples=5 00:19:23.110 lat (usec) : 500=61.71%, 750=38.21%, 1000=0.05% 00:19:23.110 lat (msec) : 2=0.02% 00:19:23.110 cpu : usr=1.65%, sys=3.50%, ctx=5994, majf=0, minf=1 00:19:23.110 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:23.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.110 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.110 issued rwts: total=5993,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.110 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:23.110 00:19:23.110 Run status group 0 (all jobs): 00:19:23.110 READ: bw=12.1MiB/s (12.7MB/s), 265KiB/s-8214KiB/s (272kB/s-8411kB/s), io=44.6MiB (46.8MB), run=2918-3691msec 00:19:23.110 00:19:23.110 Disk stats (read/write): 00:19:23.110 nvme0n1: ios=225/0, merge=0/0, ticks=3281/0, in_queue=3281, util=95.14% 00:19:23.110 nvme0n2: ios=4639/0, merge=0/0, ticks=3444/0, in_queue=3444, util=96.28% 00:19:23.110 nvme0n3: ios=610/0, merge=0/0, ticks=4171/0, in_queue=4171, util=99.63% 00:19:23.110 nvme0n4: ios=5941/0, merge=0/0, ticks=4063/0, in_queue=4063, util=99.59% 00:19:23.367 09:29:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:23.367 09:29:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:19:23.625 09:29:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:23.625 09:29:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:19:23.883 09:29:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:19:23.883 09:29:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:19:24.141 09:29:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:24.141 09:29:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:19:24.399 09:29:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:19:24.399 09:29:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 743716 00:19:24.399 09:29:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:19:24.399 09:29:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:24.399 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:24.399 09:29:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:24.399 09:29:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:19:24.399 09:29:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:24.657 09:29:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:24.657 09:29:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:24.657 09:29:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:24.657 09:29:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:19:24.657 09:29:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:19:24.657 09:29:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:19:24.657 nvmf hotplug test: fio failed as expected 00:19:24.657 09:29:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:24.916 09:29:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:19:24.916 09:29:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:19:24.916 09:29:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:19:24.916 09:29:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:19:24.917 09:29:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:19:24.917 09:29:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:24.917 09:29:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:19:24.917 09:29:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:24.917 09:29:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:19:24.917 09:29:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:24.917 09:29:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:24.917 rmmod nvme_tcp 00:19:24.917 rmmod nvme_fabrics 00:19:24.917 rmmod nvme_keyring 00:19:24.917 09:29:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:24.917 09:29:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:19:24.917 09:29:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # 
return 0 00:19:24.917 09:29:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 741701 ']' 00:19:24.917 09:29:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 741701 00:19:24.917 09:29:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 741701 ']' 00:19:24.917 09:29:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 741701 00:19:24.917 09:29:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:19:24.917 09:29:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:24.917 09:29:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 741701 00:19:24.917 09:29:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:24.917 09:29:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:24.917 09:29:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 741701' 00:19:24.917 killing process with pid 741701 00:19:24.917 09:29:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 741701 00:19:24.917 09:29:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 741701 00:19:25.176 09:29:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:25.176 09:29:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:25.176 09:29:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:25.176 09:29:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:25.176 09:29:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:25.176 09:29:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:25.176 09:29:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:25.176 09:29:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:27.077 09:29:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:27.077 00:19:27.077 real 0m23.389s 00:19:27.077 user 1m20.132s 00:19:27.077 sys 0m7.025s 00:19:27.077 09:29:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:27.077 09:29:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.077 ************************************ 00:19:27.077 END TEST nvmf_fio_target 00:19:27.077 ************************************ 00:19:27.077 09:29:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:27.077 09:29:11 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:27.077 09:29:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:27.077 09:29:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:27.077 09:29:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:27.077 ************************************ 00:19:27.077 START TEST nvmf_bdevio 00:19:27.077 ************************************ 00:19:27.077 09:29:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:27.334 * Looking for test storage... 
00:19:27.335 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:27.335 09:29:11 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:27.335 09:29:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:19:27.335 09:29:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:27.335 09:29:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:27.335 09:29:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:27.335 09:29:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:27.335 09:29:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:27.335 09:29:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:27.335 09:29:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:27.335 09:29:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:27.335 09:29:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:27.335 09:29:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:27.335 09:29:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:27.335 09:29:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:27.335 09:29:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:27.335 09:29:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:27.335 09:29:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:27.335 09:29:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:27.335 09:29:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:27.335 09:29:11 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:27.335 09:29:11 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:27.335 09:29:11 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:27.335 09:29:11 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.335 09:29:11 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.335 09:29:11 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.335 09:29:11 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:19:27.335 09:29:11 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.335 09:29:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:19:27.335 09:29:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:27.335 09:29:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:27.335 09:29:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:27.335 09:29:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:27.335 09:29:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:27.335 09:29:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:27.335 09:29:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:27.335 09:29:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:27.335 09:29:11 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:27.335 09:29:11 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:27.335 09:29:11 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:19:27.335 09:29:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:27.335 09:29:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:27.335 09:29:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:27.335 09:29:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:27.335 09:29:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:27.335 09:29:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:27.335 09:29:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:19:27.335 09:29:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:27.335 09:29:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:27.335 09:29:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:27.335 09:29:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:19:27.335 09:29:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:29.236 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:29.236 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:29.236 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:29.236 
Found net devices under 0000:0a:00.1: cvl_0_1 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:29.236 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:29.236 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:19:29.236 00:19:29.236 --- 10.0.0.2 ping statistics --- 00:19:29.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:29.236 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:29.236 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:29.236 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:19:29.236 00:19:29.236 --- 10.0.0.1 ping statistics --- 00:19:29.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:29.236 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:29.236 09:29:13 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:29.237 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:29.237 09:29:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:29.237 09:29:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:29.237 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=746426 00:19:29.237 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:19:29.237 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 746426 00:19:29.237 09:29:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 746426 ']' 00:19:29.237 09:29:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:29.237 09:29:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:29.237 09:29:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:29.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:29.237 09:29:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:29.237 09:29:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:29.237 [2024-07-14 09:29:13.604760] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:19:29.237 [2024-07-14 09:29:13.604824] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:29.237 EAL: No free 2048 kB hugepages reported on node 1 00:19:29.237 [2024-07-14 09:29:13.672391] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:29.494 [2024-07-14 09:29:13.760885] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:29.494 [2024-07-14 09:29:13.760941] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
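The nvmf_tcp_init sequence above builds the whole back-to-back loopback topology for this run: one E810 port (cvl_0_0) is moved into a private namespace and becomes the target side at 10.0.0.2, the other port (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1, and every later nvmf_tgt invocation is prefixed with ip netns exec cvl_0_0_ns_spdk via NVMF_TARGET_NS_CMD. Distilled into plain commands (interface names are specific to this host's ice ports), it is roughly:

    # Minimal sketch of the topology set up by nvmf_tcp_init above.
    ip netns add cvl_0_0_ns_spdk                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move one port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator stays in the default ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # default ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> default ns

The two pings simply confirm the cross-namespace path before any NVMe/TCP traffic is attempted.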
00:19:29.494 [2024-07-14 09:29:13.760965] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:29.494 [2024-07-14 09:29:13.760975] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:29.494 [2024-07-14 09:29:13.760985] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:29.494 [2024-07-14 09:29:13.761068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:29.494 [2024-07-14 09:29:13.761127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:19:29.494 [2024-07-14 09:29:13.761258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:19:29.495 [2024-07-14 09:29:13.761261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:29.495 09:29:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:29.495 09:29:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:19:29.495 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:29.495 09:29:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:29.495 09:29:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:29.495 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:29.495 09:29:13 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:29.495 09:29:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.495 09:29:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:29.495 [2024-07-14 09:29:13.902547] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:29.495 09:29:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.495 09:29:13 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:29.495 09:29:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.495 09:29:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:29.495 Malloc0 00:19:29.495 09:29:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.495 09:29:13 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:29.495 09:29:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.495 09:29:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:29.495 09:29:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.495 09:29:13 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:29.495 09:29:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.495 09:29:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:29.753 09:29:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.753 09:29:13 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:29.753 09:29:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.753 09:29:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:19:29.753 [2024-07-14 09:29:13.953800] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:29.753 09:29:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.753 09:29:13 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:19:29.753 09:29:13 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:29.753 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:19:29.753 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:19:29.753 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:29.753 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:29.753 { 00:19:29.753 "params": { 00:19:29.753 "name": "Nvme$subsystem", 00:19:29.753 "trtype": "$TEST_TRANSPORT", 00:19:29.753 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:29.753 "adrfam": "ipv4", 00:19:29.753 "trsvcid": "$NVMF_PORT", 00:19:29.753 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:29.753 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:29.753 "hdgst": ${hdgst:-false}, 00:19:29.753 "ddgst": ${ddgst:-false} 00:19:29.753 }, 00:19:29.753 "method": "bdev_nvme_attach_controller" 00:19:29.753 } 00:19:29.753 EOF 00:19:29.753 )") 00:19:29.753 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:19:29.753 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:19:29.753 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:19:29.753 09:29:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:29.753 "params": { 00:19:29.753 "name": "Nvme1", 00:19:29.753 "trtype": "tcp", 00:19:29.753 "traddr": "10.0.0.2", 00:19:29.753 "adrfam": "ipv4", 00:19:29.753 "trsvcid": "4420", 00:19:29.753 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:29.753 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:29.753 "hdgst": false, 00:19:29.753 "ddgst": false 00:19:29.753 }, 00:19:29.753 "method": "bdev_nvme_attach_controller" 00:19:29.753 }' 00:19:29.753 [2024-07-14 09:29:13.997507] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
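Stripped of the xtrace prefixes, bdevio.sh lines 18-24 above are a short provisioning sequence followed by pointing the bdevio binary at the result. rpc_cmd here forwards to scripts/rpc.py against the target's /var/tmp/spdk.sock, so the equivalent by hand is roughly:

    # Target-side provisioning performed above for the bdevio run.
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0     # 64 MiB backing bdev, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The JSON printed just above is the initiator half: bdevio is launched with --json /dev/fd/62 and the generated config is a single bdev_nvme_attach_controller call, which hands the test an Nvme1n1 bdev backed by that subsystem over 10.0.0.2:4420.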
00:19:29.754 [2024-07-14 09:29:13.997597] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid746461 ] 00:19:29.754 EAL: No free 2048 kB hugepages reported on node 1 00:19:29.754 [2024-07-14 09:29:14.061745] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:29.754 [2024-07-14 09:29:14.150922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:29.754 [2024-07-14 09:29:14.150974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:29.754 [2024-07-14 09:29:14.150977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:30.319 I/O targets: 00:19:30.319 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:30.319 00:19:30.319 00:19:30.319 CUnit - A unit testing framework for C - Version 2.1-3 00:19:30.319 http://cunit.sourceforge.net/ 00:19:30.319 00:19:30.319 00:19:30.319 Suite: bdevio tests on: Nvme1n1 00:19:30.319 Test: blockdev write read block ...passed 00:19:30.319 Test: blockdev write zeroes read block ...passed 00:19:30.319 Test: blockdev write zeroes read no split ...passed 00:19:30.319 Test: blockdev write zeroes read split ...passed 00:19:30.319 Test: blockdev write zeroes read split partial ...passed 00:19:30.319 Test: blockdev reset ...[2024-07-14 09:29:14.699796] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:30.319 [2024-07-14 09:29:14.699920] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1594a60 (9): Bad file descriptor 00:19:30.319 [2024-07-14 09:29:14.717778] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:30.319 passed 00:19:30.319 Test: blockdev write read 8 blocks ...passed 00:19:30.319 Test: blockdev write read size > 128k ...passed 00:19:30.319 Test: blockdev write read invalid size ...passed 00:19:30.577 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:30.577 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:30.577 Test: blockdev write read max offset ...passed 00:19:30.577 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:30.577 Test: blockdev writev readv 8 blocks ...passed 00:19:30.577 Test: blockdev writev readv 30 x 1block ...passed 00:19:30.577 Test: blockdev writev readv block ...passed 00:19:30.577 Test: blockdev writev readv size > 128k ...passed 00:19:30.577 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:30.577 Test: blockdev comparev and writev ...[2024-07-14 09:29:14.937912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:30.577 [2024-07-14 09:29:14.937947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.577 [2024-07-14 09:29:14.937972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:30.577 [2024-07-14 09:29:14.937989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:30.577 [2024-07-14 09:29:14.938428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:30.577 [2024-07-14 09:29:14.938454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:30.577 [2024-07-14 09:29:14.938477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:30.577 [2024-07-14 09:29:14.938494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:30.577 [2024-07-14 09:29:14.938923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:30.577 [2024-07-14 09:29:14.938949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:30.577 [2024-07-14 09:29:14.938972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:30.577 [2024-07-14 09:29:14.938989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:30.577 [2024-07-14 09:29:14.939427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:30.577 [2024-07-14 09:29:14.939452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:30.577 [2024-07-14 09:29:14.939475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:30.577 [2024-07-14 09:29:14.939491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:30.577 passed 00:19:30.577 Test: blockdev nvme passthru rw ...passed 00:19:30.577 Test: blockdev nvme passthru vendor specific ...[2024-07-14 09:29:15.023337] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:30.577 [2024-07-14 09:29:15.023366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:30.577 [2024-07-14 09:29:15.023587] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:30.577 [2024-07-14 09:29:15.023612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:30.577 [2024-07-14 09:29:15.023827] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:30.577 [2024-07-14 09:29:15.023851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:30.577 [2024-07-14 09:29:15.024074] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:30.577 [2024-07-14 09:29:15.024097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:30.577 passed 00:19:30.835 Test: blockdev nvme admin passthru ...passed 00:19:30.835 Test: blockdev copy ...passed 00:19:30.835 00:19:30.835 Run Summary: Type Total Ran Passed Failed Inactive 00:19:30.835 suites 1 1 n/a 0 0 00:19:30.835 tests 23 23 23 0 0 00:19:30.835 asserts 152 152 152 0 n/a 00:19:30.835 00:19:30.835 Elapsed time = 1.190 seconds 00:19:30.835 09:29:15 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:30.835 09:29:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.835 09:29:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:30.835 09:29:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.835 09:29:15 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:30.835 09:29:15 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:19:30.835 09:29:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:30.835 09:29:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:19:30.835 09:29:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:30.835 09:29:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:19:30.835 09:29:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:30.835 09:29:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:31.092 rmmod nvme_tcp 00:19:31.092 rmmod nvme_fabrics 00:19:31.092 rmmod nvme_keyring 00:19:31.092 09:29:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:31.092 09:29:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:19:31.092 09:29:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:19:31.092 09:29:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 746426 ']' 00:19:31.092 09:29:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 746426 00:19:31.092 09:29:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
746426 ']' 00:19:31.092 09:29:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 746426 00:19:31.092 09:29:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:19:31.092 09:29:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:31.092 09:29:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 746426 00:19:31.092 09:29:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:19:31.093 09:29:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:19:31.093 09:29:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 746426' 00:19:31.093 killing process with pid 746426 00:19:31.093 09:29:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 746426 00:19:31.093 09:29:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 746426 00:19:31.376 09:29:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:31.376 09:29:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:31.376 09:29:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:31.376 09:29:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:31.376 09:29:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:31.376 09:29:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:31.376 09:29:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:31.376 09:29:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:33.279 09:29:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:33.279 00:19:33.279 real 0m6.129s 00:19:33.279 user 0m10.429s 00:19:33.279 sys 0m1.968s 00:19:33.279 09:29:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:33.279 09:29:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:33.279 ************************************ 00:19:33.279 END TEST nvmf_bdevio 00:19:33.279 ************************************ 00:19:33.279 09:29:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:33.279 09:29:17 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:33.279 09:29:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:33.279 09:29:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:33.279 09:29:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:33.279 ************************************ 00:19:33.279 START TEST nvmf_auth_target 00:19:33.279 ************************************ 00:19:33.279 09:29:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:33.557 * Looking for test storage... 
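The nvmf_bdevio teardown above follows the usual nvmftestfini path. Reduced to its visible effects on this host (PID, namespace and interface names are from this run), it amounts to roughly:

    # Sketch of the teardown performed above.
    modprobe -v -r nvme-tcp            # the rmmod lines show nvme_fabrics/nvme_keyring going with it
    modprobe -v -r nvme-fabrics
    kill 746426 && wait 746426         # killprocess: stop the nvmf_tgt reactors
    ip netns delete cvl_0_0_ns_spdk    # assumption: what _remove_spdk_ns amounts to here
    ip -4 addr flush cvl_0_1

Before killing, killprocess checks ps --no-headers -o comm= on the PID so it never signals anything that is not an SPDK reactor.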
00:19:33.557 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:33.557 09:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:33.557 09:29:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:33.557 09:29:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:33.557 09:29:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:33.557 09:29:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:33.557 09:29:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:33.557 09:29:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:33.557 09:29:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:33.557 09:29:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:33.557 09:29:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:33.557 09:29:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:33.557 09:29:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:33.557 09:29:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:33.557 09:29:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:33.558 09:29:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:33.558 09:29:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:33.558 09:29:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:33.558 09:29:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:33.558 09:29:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:33.558 09:29:17 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:33.558 09:29:17 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:33.558 09:29:17 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:33.558 09:29:17 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.558 09:29:17 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.558 09:29:17 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.558 09:29:17 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:33.558 09:29:17 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.558 09:29:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:19:33.558 09:29:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:33.558 09:29:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:33.558 09:29:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:33.558 09:29:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:33.558 09:29:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:33.558 09:29:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:33.558 09:29:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:33.558 09:29:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:33.558 09:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:33.558 09:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:33.558 09:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:33.558 09:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:33.558 09:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:33.558 09:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:33.558 09:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:33.558 09:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:19:33.558 09:29:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:33.558 09:29:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:33.558 09:29:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:33.558 09:29:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:33.558 09:29:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:33.558 09:29:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:33.558 09:29:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:33.558 09:29:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:33.558 09:29:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:33.558 09:29:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:33.558 09:29:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:19:33.558 09:29:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:35.469 09:29:19 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:35.469 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:35.469 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: 
cvl_0_0' 00:19:35.469 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:35.469 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:35.469 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:35.470 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:35.470 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:35.470 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:35.470 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:35.470 09:29:19 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:35.470 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:35.470 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:35.470 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:35.470 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:19:35.470 00:19:35.470 --- 10.0.0.2 ping statistics --- 00:19:35.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:35.470 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:19:35.470 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:35.470 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:35.470 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:19:35.470 00:19:35.470 --- 10.0.0.1 ping statistics --- 00:19:35.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:35.470 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:19:35.470 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:35.470 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:19:35.470 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:35.470 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:35.470 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:35.470 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:35.470 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:35.470 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:35.470 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:35.470 09:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:19:35.470 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:35.470 09:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:35.470 09:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.470 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=748530 00:19:35.470 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:35.470 09:29:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 748530 00:19:35.470 09:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 748530 ']' 00:19:35.470 09:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:35.470 09:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:35.470 09:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
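nvmfappstart above repeats the pattern from the bdevio run, this time with -L nvmf_auth so the target emits its in-band authentication debug log. The start-and-wait idiom it relies on is, in outline (paths shortened; the real waitforlisten helper also gives up after ~100 retries or if the PID dies):

    # Outline of nvmfappstart/waitforlisten as used above.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1   # poll until the target's RPC socket answers
    done

Only once the RPC socket answers does the script move on to configuring transports, subsystems and DH-HMAC-CHAP keys.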
00:19:35.470 09:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:35.470 09:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.728 09:29:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:35.729 09:29:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:35.729 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:35.729 09:29:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:35.729 09:29:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=748664 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=745b7fd5f93e501e34a030afd78169fc57c14fc2de2eaea0 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.1O1 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 745b7fd5f93e501e34a030afd78169fc57c14fc2de2eaea0 0 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 745b7fd5f93e501e34a030afd78169fc57c14fc2de2eaea0 0 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=745b7fd5f93e501e34a030afd78169fc57c14fc2de2eaea0 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.1O1 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.1O1 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.1O1 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e3a6aba226cc867a7d0625c00b20a477c436c2ef19c04d851fd740e5ca3cf5ad 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.gZK 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e3a6aba226cc867a7d0625c00b20a477c436c2ef19c04d851fd740e5ca3cf5ad 3 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e3a6aba226cc867a7d0625c00b20a477c436c2ef19c04d851fd740e5ca3cf5ad 3 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e3a6aba226cc867a7d0625c00b20a477c436c2ef19c04d851fd740e5ca3cf5ad 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.gZK 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.gZK 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.gZK 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0fc3ead1f08595e8779b0850f3d9b0c3 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.32b 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0fc3ead1f08595e8779b0850f3d9b0c3 1 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0fc3ead1f08595e8779b0850f3d9b0c3 1 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=0fc3ead1f08595e8779b0850f3d9b0c3 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.32b 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.32b 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.32b 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c41fe40d7f480cb75a6705e8170fffdd0422a701a7f1da5c 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.ZGL 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c41fe40d7f480cb75a6705e8170fffdd0422a701a7f1da5c 2 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c41fe40d7f480cb75a6705e8170fffdd0422a701a7f1da5c 2 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c41fe40d7f480cb75a6705e8170fffdd0422a701a7f1da5c 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.ZGL 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.ZGL 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.ZGL 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d197f396893c380a382d437dac31755ce3438caec7fbd2bd 00:19:35.988 
09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.2n2 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d197f396893c380a382d437dac31755ce3438caec7fbd2bd 2 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d197f396893c380a382d437dac31755ce3438caec7fbd2bd 2 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d197f396893c380a382d437dac31755ce3438caec7fbd2bd 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:35.988 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:36.247 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.2n2 00:19:36.247 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.2n2 00:19:36.247 09:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.2n2 00:19:36.247 09:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:19:36.247 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:36.247 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:36.247 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:36.247 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:36.247 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:36.247 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:36.247 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ed959e4183fd269a0a118560d8d32478 00:19:36.247 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:36.247 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.rMD 00:19:36.247 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ed959e4183fd269a0a118560d8d32478 1 00:19:36.247 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ed959e4183fd269a0a118560d8d32478 1 00:19:36.247 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:36.247 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:36.247 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ed959e4183fd269a0a118560d8d32478 00:19:36.247 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:36.247 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:36.247 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.rMD 00:19:36.247 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.rMD 00:19:36.247 09:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.rMD 00:19:36.247 09:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:19:36.247 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:19:36.247 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:36.247 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:36.247 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:36.247 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:36.247 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:36.247 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d7409320cf8e9657eeca67a90f1d1a89b1ac972326342d6fba6871c8ed359186 00:19:36.247 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:36.247 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.LMJ 00:19:36.247 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d7409320cf8e9657eeca67a90f1d1a89b1ac972326342d6fba6871c8ed359186 3 00:19:36.247 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d7409320cf8e9657eeca67a90f1d1a89b1ac972326342d6fba6871c8ed359186 3 00:19:36.247 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:36.247 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:36.247 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d7409320cf8e9657eeca67a90f1d1a89b1ac972326342d6fba6871c8ed359186 00:19:36.247 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:36.247 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:36.247 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.LMJ 00:19:36.248 09:29:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.LMJ 00:19:36.248 09:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.LMJ 00:19:36.248 09:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:19:36.248 09:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 748530 00:19:36.248 09:29:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 748530 ']' 00:19:36.248 09:29:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:36.248 09:29:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:36.248 09:29:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:36.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
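The trace above shows nvmf/common.sh minting the DH-HMAC-CHAP secrets used for the rest of this test: gen_dhchap_key reads len/2 random bytes from /dev/urandom with xxd -p, and format_dhchap_key (digest index 0=null, 1=sha256, 2=sha384, 3=sha512) wraps the resulting hex string into a DHHC-1 secret via a python one-liner whose body the log elides. Judging from the DHHC-1:0X:...: secrets passed to nvme connect later in this run, the payload is base64 of the ASCII key followed by a 32-bit CRC, so a stand-alone approximation (not the exact script from nvmf/common.sh) looks roughly like this:

  # sketch of "gen_dhchap_key sha384 48" / "format_dhchap_key <key> 2"
  len=48; digest=2
  key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # 48 hex chars, e.g. c41fe40d...f1da5c
  file=$(mktemp -t spdk.key-sha384.XXX)
  python3 - "$key" "$digest" > "$file" <<'PY'
  import sys, base64, struct, zlib
  key, digest = sys.argv[1].encode(), int(sys.argv[2])
  crc = struct.pack('<I', zlib.crc32(key) & 0xffffffff)   # assumed: CRC32 of the ASCII key, little-endian
  print(f'DHHC-1:{digest:02}:' + base64.b64encode(key + crc).decode() + ':')
  PY
  chmod 0600 "$file"
  echo "$file"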
00:19:36.248 09:29:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:36.248 09:29:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.506 09:29:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:36.506 09:29:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:36.506 09:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 748664 /var/tmp/host.sock 00:19:36.506 09:29:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 748664 ']' 00:19:36.506 09:29:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:19:36.506 09:29:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:36.506 09:29:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:36.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:19:36.506 09:29:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:36.506 09:29:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.764 09:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:36.764 09:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:36.764 09:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:19:36.764 09:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.764 09:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.764 09:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.764 09:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:36.764 09:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.1O1 00:19:36.764 09:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.764 09:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.764 09:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.764 09:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.1O1 00:19:36.764 09:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.1O1 00:19:37.021 09:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.gZK ]] 00:19:37.022 09:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.gZK 00:19:37.022 09:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.022 09:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.022 09:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.022 09:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.gZK 00:19:37.022 09:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.gZK 00:19:37.279 09:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:37.279 09:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.32b 00:19:37.279 09:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.279 09:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.279 09:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.279 09:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.32b 00:19:37.279 09:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.32b 00:19:37.537 09:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.ZGL ]] 00:19:37.537 09:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ZGL 00:19:37.537 09:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.537 09:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.537 09:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.537 09:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ZGL 00:19:37.537 09:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ZGL 00:19:37.795 09:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:37.795 09:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.2n2 00:19:37.795 09:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.795 09:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.795 09:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.795 09:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.2n2 00:19:37.795 09:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.2n2 00:19:38.053 09:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.rMD ]] 00:19:38.053 09:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.rMD 00:19:38.053 09:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.053 09:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.053 09:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.053 09:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.rMD 00:19:38.053 09:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.rMD 00:19:38.311 09:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:38.311 09:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.LMJ 00:19:38.311 09:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.311 09:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.311 09:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.311 09:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.LMJ 00:19:38.311 09:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.LMJ 00:19:38.568 09:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:19:38.568 09:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:38.568 09:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:38.569 09:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:38.569 09:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:38.569 09:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:38.826 09:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:19:38.826 09:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:38.826 09:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:38.826 09:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:38.826 09:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:38.826 09:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.826 09:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.826 09:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.826 09:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.826 09:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.826 09:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.826 09:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.083 00:19:39.083 09:29:23 nvmf_tcp.nvmf_auth_target -- 
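Every key file generated above gets registered twice in the keyring_file_add_key calls traced here: rpc_cmd talks to the main SPDK target on its default RPC socket (/var/tmp/spdk.sock, the pid-748530 instance waited on above), while hostrpc is the target/auth.sh helper that points the same rpc.py at the second SPDK instance acting as the host (/var/tmp/host.sock, pid 748664). Condensed, the pattern for one key pair is:

  # hostrpc as it appears throughout this log: same rpc.py, different server socket
  hostrpc() {
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
          -s /var/tmp/host.sock "$@"
  }

  # register key1 and its controller key on both sides
  # (rpc.py here stands for scripts/rpc.py against the default /var/tmp/spdk.sock socket)
  rpc.py  keyring_file_add_key key1  /tmp/spdk.key-sha256.32b
  hostrpc keyring_file_add_key key1  /tmp/spdk.key-sha256.32b
  rpc.py  keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ZGL
  hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ZGL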
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:39.083 09:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:39.083 09:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.339 09:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.339 09:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.339 09:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.339 09:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.339 09:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.339 09:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:39.339 { 00:19:39.339 "cntlid": 1, 00:19:39.339 "qid": 0, 00:19:39.339 "state": "enabled", 00:19:39.339 "thread": "nvmf_tgt_poll_group_000", 00:19:39.339 "listen_address": { 00:19:39.339 "trtype": "TCP", 00:19:39.339 "adrfam": "IPv4", 00:19:39.339 "traddr": "10.0.0.2", 00:19:39.339 "trsvcid": "4420" 00:19:39.339 }, 00:19:39.339 "peer_address": { 00:19:39.339 "trtype": "TCP", 00:19:39.339 "adrfam": "IPv4", 00:19:39.339 "traddr": "10.0.0.1", 00:19:39.339 "trsvcid": "55854" 00:19:39.339 }, 00:19:39.339 "auth": { 00:19:39.339 "state": "completed", 00:19:39.339 "digest": "sha256", 00:19:39.339 "dhgroup": "null" 00:19:39.339 } 00:19:39.339 } 00:19:39.339 ]' 00:19:39.339 09:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:39.339 09:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:39.339 09:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:39.339 09:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:39.339 09:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:39.339 09:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.339 09:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.339 09:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.596 09:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NzQ1YjdmZDVmOTNlNTAxZTM0YTAzMGFmZDc4MTY5ZmM1N2MxNGZjMmRlMmVhZWEwIyEwPg==: --dhchap-ctrl-secret DHHC-1:03:ZTNhNmFiYTIyNmNjODY3YTdkMDYyNWMwMGIyMGE0NzdjNDM2YzJlZjE5YzA0ZDg1MWZkNzQwZTVjYTNjZjVhZBT0aQU=: 00:19:40.967 09:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.967 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.967 09:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:40.967 09:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.967 09:29:25 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.967 09:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.967 09:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:40.967 09:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:40.967 09:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:40.967 09:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:19:40.967 09:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:40.967 09:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:40.967 09:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:40.967 09:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:40.967 09:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.967 09:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.967 09:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.967 09:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.967 09:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.967 09:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.967 09:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.224 00:19:41.224 09:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:41.224 09:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.224 09:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:41.481 09:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.481 09:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.481 09:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.481 09:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.481 09:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.481 09:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:41.481 { 00:19:41.481 "cntlid": 3, 00:19:41.481 "qid": 0, 00:19:41.481 
"state": "enabled", 00:19:41.481 "thread": "nvmf_tgt_poll_group_000", 00:19:41.481 "listen_address": { 00:19:41.481 "trtype": "TCP", 00:19:41.481 "adrfam": "IPv4", 00:19:41.481 "traddr": "10.0.0.2", 00:19:41.481 "trsvcid": "4420" 00:19:41.481 }, 00:19:41.481 "peer_address": { 00:19:41.481 "trtype": "TCP", 00:19:41.481 "adrfam": "IPv4", 00:19:41.481 "traddr": "10.0.0.1", 00:19:41.481 "trsvcid": "55878" 00:19:41.481 }, 00:19:41.481 "auth": { 00:19:41.481 "state": "completed", 00:19:41.481 "digest": "sha256", 00:19:41.481 "dhgroup": "null" 00:19:41.481 } 00:19:41.481 } 00:19:41.481 ]' 00:19:41.481 09:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:41.737 09:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:41.737 09:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:41.737 09:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:41.737 09:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:41.738 09:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.738 09:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.738 09:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.995 09:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MGZjM2VhZDFmMDg1OTVlODc3OWIwODUwZjNkOWIwYzNGi3HN: --dhchap-ctrl-secret DHHC-1:02:YzQxZmU0MGQ3ZjQ4MGNiNzVhNjcwNWU4MTcwZmZmZGQwNDIyYTcwMWE3ZjFkYTVjZh8NIw==: 00:19:42.928 09:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.928 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.928 09:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:42.928 09:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.928 09:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.928 09:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.928 09:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:42.928 09:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:42.928 09:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:43.186 09:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:19:43.186 09:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:43.186 09:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:43.186 09:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:43.186 09:29:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:43.186 09:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.186 09:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.186 09:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.186 09:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.186 09:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.186 09:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.186 09:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.444 00:19:43.444 09:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:43.444 09:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:43.444 09:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.702 09:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.702 09:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.702 09:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.702 09:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.702 09:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.702 09:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:43.702 { 00:19:43.702 "cntlid": 5, 00:19:43.702 "qid": 0, 00:19:43.702 "state": "enabled", 00:19:43.702 "thread": "nvmf_tgt_poll_group_000", 00:19:43.702 "listen_address": { 00:19:43.702 "trtype": "TCP", 00:19:43.702 "adrfam": "IPv4", 00:19:43.702 "traddr": "10.0.0.2", 00:19:43.702 "trsvcid": "4420" 00:19:43.702 }, 00:19:43.702 "peer_address": { 00:19:43.702 "trtype": "TCP", 00:19:43.702 "adrfam": "IPv4", 00:19:43.702 "traddr": "10.0.0.1", 00:19:43.702 "trsvcid": "41086" 00:19:43.702 }, 00:19:43.702 "auth": { 00:19:43.702 "state": "completed", 00:19:43.702 "digest": "sha256", 00:19:43.702 "dhgroup": "null" 00:19:43.702 } 00:19:43.702 } 00:19:43.702 ]' 00:19:43.702 09:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:43.960 09:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:43.960 09:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:43.960 09:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:43.960 09:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:19:43.960 09:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.960 09:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.960 09:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.218 09:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZDE5N2YzOTY4OTNjMzgwYTM4MmQ0MzdkYWMzMTc1NWNlMzQzOGNhZWM3ZmJkMmJkcIhjtQ==: --dhchap-ctrl-secret DHHC-1:01:ZWQ5NTllNDE4M2ZkMjY5YTBhMTE4NTYwZDhkMzI0NziXBON7: 00:19:45.152 09:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.152 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.152 09:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:45.152 09:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.152 09:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.152 09:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.152 09:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:45.152 09:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:45.152 09:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:45.409 09:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:19:45.409 09:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:45.409 09:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:45.409 09:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:45.410 09:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:45.410 09:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:45.410 09:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:45.410 09:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.410 09:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.410 09:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.410 09:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:45.410 09:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:45.668 00:19:45.926 09:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:45.926 09:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:45.926 09:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.926 09:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.926 09:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.926 09:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.926 09:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.183 09:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.183 09:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:46.183 { 00:19:46.183 "cntlid": 7, 00:19:46.183 "qid": 0, 00:19:46.183 "state": "enabled", 00:19:46.183 "thread": "nvmf_tgt_poll_group_000", 00:19:46.183 "listen_address": { 00:19:46.183 "trtype": "TCP", 00:19:46.183 "adrfam": "IPv4", 00:19:46.183 "traddr": "10.0.0.2", 00:19:46.183 "trsvcid": "4420" 00:19:46.183 }, 00:19:46.183 "peer_address": { 00:19:46.183 "trtype": "TCP", 00:19:46.183 "adrfam": "IPv4", 00:19:46.183 "traddr": "10.0.0.1", 00:19:46.183 "trsvcid": "41110" 00:19:46.183 }, 00:19:46.183 "auth": { 00:19:46.183 "state": "completed", 00:19:46.183 "digest": "sha256", 00:19:46.183 "dhgroup": "null" 00:19:46.183 } 00:19:46.183 } 00:19:46.183 ]' 00:19:46.183 09:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:46.183 09:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:46.183 09:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:46.183 09:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:46.183 09:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:46.183 09:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.183 09:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.183 09:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.443 09:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZDc0MDkzMjBjZjhlOTY1N2VlY2E2N2E5MGYxZDFhODliMWFjOTcyMzI2MzQyZDZmYmE2ODcxYzhlZDM1OTE4NgHa9ec=: 00:19:47.374 09:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.374 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.375 09:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:47.375 09:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.375 09:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.375 09:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.375 09:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:47.375 09:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:47.375 09:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:47.375 09:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:47.939 09:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:19:47.939 09:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:47.939 09:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:47.939 09:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:47.939 09:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:47.939 09:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.939 09:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:47.939 09:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.939 09:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.939 09:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.939 09:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:47.939 09:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.196 00:19:48.196 09:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:48.196 09:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:48.196 09:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.453 09:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.453 09:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.453 09:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:19:48.453 09:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.453 09:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.453 09:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:48.453 { 00:19:48.453 "cntlid": 9, 00:19:48.453 "qid": 0, 00:19:48.453 "state": "enabled", 00:19:48.453 "thread": "nvmf_tgt_poll_group_000", 00:19:48.453 "listen_address": { 00:19:48.453 "trtype": "TCP", 00:19:48.453 "adrfam": "IPv4", 00:19:48.453 "traddr": "10.0.0.2", 00:19:48.453 "trsvcid": "4420" 00:19:48.453 }, 00:19:48.453 "peer_address": { 00:19:48.453 "trtype": "TCP", 00:19:48.453 "adrfam": "IPv4", 00:19:48.453 "traddr": "10.0.0.1", 00:19:48.453 "trsvcid": "41144" 00:19:48.453 }, 00:19:48.453 "auth": { 00:19:48.453 "state": "completed", 00:19:48.453 "digest": "sha256", 00:19:48.453 "dhgroup": "ffdhe2048" 00:19:48.453 } 00:19:48.453 } 00:19:48.453 ]' 00:19:48.453 09:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:48.453 09:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:48.453 09:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:48.453 09:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:48.453 09:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:48.453 09:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.453 09:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.453 09:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.016 09:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NzQ1YjdmZDVmOTNlNTAxZTM0YTAzMGFmZDc4MTY5ZmM1N2MxNGZjMmRlMmVhZWEwIyEwPg==: --dhchap-ctrl-secret DHHC-1:03:ZTNhNmFiYTIyNmNjODY3YTdkMDYyNWMwMGIyMGE0NzdjNDM2YzJlZjE5YzA0ZDg1MWZkNzQwZTVjYTNjZjVhZBT0aQU=: 00:19:49.947 09:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.947 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.947 09:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:49.947 09:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.947 09:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.947 09:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.947 09:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:49.947 09:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:49.947 09:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:19:50.204 09:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:19:50.204 09:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:50.204 09:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:50.204 09:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:50.204 09:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:50.204 09:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.204 09:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.204 09:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.204 09:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.204 09:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.204 09:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.204 09:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.460 00:19:50.460 09:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:50.460 09:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:50.460 09:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.718 09:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.718 09:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.718 09:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.718 09:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.718 09:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.718 09:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:50.718 { 00:19:50.718 "cntlid": 11, 00:19:50.718 "qid": 0, 00:19:50.718 "state": "enabled", 00:19:50.718 "thread": "nvmf_tgt_poll_group_000", 00:19:50.718 "listen_address": { 00:19:50.718 "trtype": "TCP", 00:19:50.718 "adrfam": "IPv4", 00:19:50.718 "traddr": "10.0.0.2", 00:19:50.718 "trsvcid": "4420" 00:19:50.718 }, 00:19:50.718 "peer_address": { 00:19:50.718 "trtype": "TCP", 00:19:50.718 "adrfam": "IPv4", 00:19:50.718 "traddr": "10.0.0.1", 00:19:50.718 "trsvcid": "41178" 00:19:50.718 }, 00:19:50.718 "auth": { 00:19:50.718 "state": "completed", 00:19:50.718 "digest": "sha256", 00:19:50.718 "dhgroup": "ffdhe2048" 00:19:50.718 } 00:19:50.718 } 00:19:50.718 ]' 00:19:50.718 
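After each attach the test proves that DH-HMAC-CHAP actually completed on the new admin queue pair: it fetches the qpair list from the target and checks the auth descriptor of qpair 0 against the digest and dhgroup just configured, plus the "completed" state, exactly as the qpairs JSON above shows. A minimal stand-alone version of that assertion:

  qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]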
09:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:50.718 09:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:50.718 09:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:50.718 09:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:50.718 09:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:50.718 09:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.718 09:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.718 09:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.976 09:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MGZjM2VhZDFmMDg1OTVlODc3OWIwODUwZjNkOWIwYzNGi3HN: --dhchap-ctrl-secret DHHC-1:02:YzQxZmU0MGQ3ZjQ4MGNiNzVhNjcwNWU4MTcwZmZmZGQwNDIyYTcwMWE3ZjFkYTVjZh8NIw==: 00:19:51.933 09:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.191 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.191 09:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:52.191 09:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.191 09:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.191 09:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.191 09:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:52.191 09:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:52.191 09:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:52.448 09:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:19:52.448 09:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:52.448 09:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:52.448 09:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:52.448 09:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:52.448 09:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.448 09:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.448 09:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.448 09:29:36 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:52.448 09:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.448 09:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.448 09:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.707 00:19:52.707 09:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:52.707 09:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:52.707 09:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.965 09:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.965 09:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.965 09:29:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.965 09:29:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.965 09:29:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.965 09:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:52.965 { 00:19:52.965 "cntlid": 13, 00:19:52.965 "qid": 0, 00:19:52.965 "state": "enabled", 00:19:52.965 "thread": "nvmf_tgt_poll_group_000", 00:19:52.965 "listen_address": { 00:19:52.965 "trtype": "TCP", 00:19:52.965 "adrfam": "IPv4", 00:19:52.965 "traddr": "10.0.0.2", 00:19:52.965 "trsvcid": "4420" 00:19:52.965 }, 00:19:52.965 "peer_address": { 00:19:52.965 "trtype": "TCP", 00:19:52.965 "adrfam": "IPv4", 00:19:52.965 "traddr": "10.0.0.1", 00:19:52.965 "trsvcid": "41208" 00:19:52.965 }, 00:19:52.965 "auth": { 00:19:52.965 "state": "completed", 00:19:52.965 "digest": "sha256", 00:19:52.965 "dhgroup": "ffdhe2048" 00:19:52.965 } 00:19:52.965 } 00:19:52.965 ]' 00:19:52.965 09:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:52.965 09:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:52.965 09:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:52.965 09:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:52.965 09:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:52.965 09:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.965 09:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.965 09:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.223 09:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZDE5N2YzOTY4OTNjMzgwYTM4MmQ0MzdkYWMzMTc1NWNlMzQzOGNhZWM3ZmJkMmJkcIhjtQ==: --dhchap-ctrl-secret DHHC-1:01:ZWQ5NTllNDE4M2ZkMjY5YTBhMTE4NTYwZDhkMzI0NziXBON7: 00:19:54.156 09:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.414 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.414 09:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:54.414 09:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.414 09:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.414 09:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.414 09:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:54.414 09:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:54.414 09:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:54.671 09:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:19:54.671 09:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:54.671 09:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:54.671 09:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:54.671 09:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:54.671 09:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.671 09:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:54.671 09:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.671 09:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.671 09:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.671 09:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:54.671 09:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:54.929 00:19:54.929 09:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:54.929 09:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:54.929 09:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.186 09:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.186 09:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.186 09:29:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.186 09:29:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.186 09:29:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.186 09:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:55.186 { 00:19:55.186 "cntlid": 15, 00:19:55.186 "qid": 0, 00:19:55.186 "state": "enabled", 00:19:55.186 "thread": "nvmf_tgt_poll_group_000", 00:19:55.186 "listen_address": { 00:19:55.186 "trtype": "TCP", 00:19:55.186 "adrfam": "IPv4", 00:19:55.186 "traddr": "10.0.0.2", 00:19:55.186 "trsvcid": "4420" 00:19:55.186 }, 00:19:55.186 "peer_address": { 00:19:55.186 "trtype": "TCP", 00:19:55.186 "adrfam": "IPv4", 00:19:55.186 "traddr": "10.0.0.1", 00:19:55.186 "trsvcid": "49424" 00:19:55.186 }, 00:19:55.186 "auth": { 00:19:55.186 "state": "completed", 00:19:55.186 "digest": "sha256", 00:19:55.186 "dhgroup": "ffdhe2048" 00:19:55.186 } 00:19:55.186 } 00:19:55.186 ]' 00:19:55.186 09:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:55.186 09:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:55.186 09:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:55.443 09:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:55.443 09:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:55.443 09:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.443 09:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.443 09:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.702 09:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZDc0MDkzMjBjZjhlOTY1N2VlY2E2N2E5MGYxZDFhODliMWFjOTcyMzI2MzQyZDZmYmE2ODcxYzhlZDM1OTE4NgHa9ec=: 00:19:56.636 09:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.636 09:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:56.636 09:29:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.636 09:29:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.636 09:29:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.636 09:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:56.636 09:29:40 nvmf_tcp.nvmf_auth_target 
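With null and ffdhe2048 exercised for all four keys, the outer dhgroup loop advances below to ffdhe3072; only the host-side option call changes between passes before the same four per-key iterations repeat:

  hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072

Later passes presumably continue through the remaining FFDHE groups and the sha384/sha512 digests, which is why this stretch of the log repeats the same connect/verify/disconnect cycle so many times.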
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:56.636 09:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:56.636 09:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:56.894 09:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:19:56.894 09:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:56.894 09:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:56.894 09:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:56.894 09:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:56.894 09:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.894 09:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.894 09:29:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.894 09:29:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.894 09:29:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.894 09:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.894 09:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:57.152 00:19:57.152 09:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:57.153 09:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:57.153 09:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.411 09:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.411 09:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.411 09:29:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.411 09:29:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.411 09:29:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.411 09:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:57.411 { 00:19:57.411 "cntlid": 17, 00:19:57.411 "qid": 0, 00:19:57.411 "state": "enabled", 00:19:57.411 "thread": "nvmf_tgt_poll_group_000", 00:19:57.411 "listen_address": { 00:19:57.411 "trtype": "TCP", 00:19:57.411 "adrfam": "IPv4", 00:19:57.411 "traddr": 
"10.0.0.2", 00:19:57.411 "trsvcid": "4420" 00:19:57.411 }, 00:19:57.411 "peer_address": { 00:19:57.411 "trtype": "TCP", 00:19:57.411 "adrfam": "IPv4", 00:19:57.411 "traddr": "10.0.0.1", 00:19:57.411 "trsvcid": "49434" 00:19:57.411 }, 00:19:57.411 "auth": { 00:19:57.411 "state": "completed", 00:19:57.411 "digest": "sha256", 00:19:57.411 "dhgroup": "ffdhe3072" 00:19:57.411 } 00:19:57.411 } 00:19:57.411 ]' 00:19:57.411 09:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:57.668 09:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:57.668 09:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:57.668 09:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:57.668 09:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:57.668 09:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.668 09:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.668 09:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.926 09:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NzQ1YjdmZDVmOTNlNTAxZTM0YTAzMGFmZDc4MTY5ZmM1N2MxNGZjMmRlMmVhZWEwIyEwPg==: --dhchap-ctrl-secret DHHC-1:03:ZTNhNmFiYTIyNmNjODY3YTdkMDYyNWMwMGIyMGE0NzdjNDM2YzJlZjE5YzA0ZDg1MWZkNzQwZTVjYTNjZjVhZBT0aQU=: 00:19:58.869 09:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.869 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.869 09:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:58.869 09:29:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.869 09:29:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.869 09:29:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.869 09:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:58.869 09:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:58.869 09:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:59.188 09:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:19:59.188 09:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:59.188 09:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:59.188 09:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:59.188 09:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:59.188 09:29:43 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.188 09:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.188 09:29:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.188 09:29:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.188 09:29:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.188 09:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.188 09:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.752 00:19:59.752 09:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:59.752 09:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:59.752 09:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.752 09:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.752 09:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.752 09:29:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.752 09:29:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.009 09:29:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.009 09:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:00.009 { 00:20:00.009 "cntlid": 19, 00:20:00.009 "qid": 0, 00:20:00.009 "state": "enabled", 00:20:00.009 "thread": "nvmf_tgt_poll_group_000", 00:20:00.009 "listen_address": { 00:20:00.009 "trtype": "TCP", 00:20:00.009 "adrfam": "IPv4", 00:20:00.009 "traddr": "10.0.0.2", 00:20:00.009 "trsvcid": "4420" 00:20:00.009 }, 00:20:00.009 "peer_address": { 00:20:00.009 "trtype": "TCP", 00:20:00.009 "adrfam": "IPv4", 00:20:00.009 "traddr": "10.0.0.1", 00:20:00.009 "trsvcid": "49458" 00:20:00.009 }, 00:20:00.009 "auth": { 00:20:00.009 "state": "completed", 00:20:00.009 "digest": "sha256", 00:20:00.009 "dhgroup": "ffdhe3072" 00:20:00.009 } 00:20:00.009 } 00:20:00.009 ]' 00:20:00.009 09:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:00.009 09:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:00.009 09:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:00.009 09:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:00.009 09:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:00.009 09:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.009 09:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.009 09:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.267 09:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MGZjM2VhZDFmMDg1OTVlODc3OWIwODUwZjNkOWIwYzNGi3HN: --dhchap-ctrl-secret DHHC-1:02:YzQxZmU0MGQ3ZjQ4MGNiNzVhNjcwNWU4MTcwZmZmZGQwNDIyYTcwMWE3ZjFkYTVjZh8NIw==: 00:20:01.200 09:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.200 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.200 09:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:01.200 09:29:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.200 09:29:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.200 09:29:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.200 09:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:01.200 09:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:01.200 09:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:01.458 09:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:20:01.459 09:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:01.459 09:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:01.459 09:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:01.459 09:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:01.459 09:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.459 09:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.459 09:29:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.459 09:29:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.459 09:29:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.459 09:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.459 09:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.025 00:20:02.025 09:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:02.025 09:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:02.025 09:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.025 09:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.025 09:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.025 09:29:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.025 09:29:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.025 09:29:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.025 09:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:02.025 { 00:20:02.025 "cntlid": 21, 00:20:02.025 "qid": 0, 00:20:02.025 "state": "enabled", 00:20:02.025 "thread": "nvmf_tgt_poll_group_000", 00:20:02.025 "listen_address": { 00:20:02.025 "trtype": "TCP", 00:20:02.025 "adrfam": "IPv4", 00:20:02.025 "traddr": "10.0.0.2", 00:20:02.025 "trsvcid": "4420" 00:20:02.025 }, 00:20:02.025 "peer_address": { 00:20:02.025 "trtype": "TCP", 00:20:02.025 "adrfam": "IPv4", 00:20:02.025 "traddr": "10.0.0.1", 00:20:02.025 "trsvcid": "49496" 00:20:02.025 }, 00:20:02.025 "auth": { 00:20:02.025 "state": "completed", 00:20:02.025 "digest": "sha256", 00:20:02.025 "dhgroup": "ffdhe3072" 00:20:02.025 } 00:20:02.025 } 00:20:02.025 ]' 00:20:02.025 09:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:02.284 09:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:02.284 09:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:02.284 09:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:02.284 09:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:02.284 09:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.284 09:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.284 09:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.542 09:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZDE5N2YzOTY4OTNjMzgwYTM4MmQ0MzdkYWMzMTc1NWNlMzQzOGNhZWM3ZmJkMmJkcIhjtQ==: --dhchap-ctrl-secret DHHC-1:01:ZWQ5NTllNDE4M2ZkMjY5YTBhMTE4NTYwZDhkMzI0NziXBON7: 00:20:03.475 09:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.475 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
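Each pass in this log runs the same provisioning sequence before the kernel connect is attempted: the host-side bdev_nvme options are pinned to one digest/dhgroup pair, the host NQN is added to the subsystem with the key pair under test, and a controller is attached over the authenticated queue pair. A condensed sketch of that sequence, assembled only from commands already shown above (the rpc.py path is folded into a shell variable purely for readability, and rpc_cmd from the log is assumed to hit the target's default RPC socket; the ffdhe3072/key2 values are the ones from the pass that just completed):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

    # host side (-s /var/tmp/host.sock): restrict the initiator to one digest/dhgroup pair
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072

    # target side (rpc_cmd in the log, assumed default RPC socket): authorize the host NQN with the keys under test
    $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # host side: attach a controller through the DH-HMAC-CHAP-authenticated queue pair
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2

The remove_host call in the next entry is the matching teardown, so the following key/dhgroup pass starts from a clean subsystem.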
00:20:03.475 09:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:03.475 09:29:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.475 09:29:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.475 09:29:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.475 09:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:03.475 09:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:03.475 09:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:03.733 09:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:20:03.733 09:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:03.733 09:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:03.733 09:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:03.733 09:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:03.733 09:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.733 09:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:03.733 09:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.733 09:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.733 09:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.733 09:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:03.733 09:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:03.991 00:20:03.991 09:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:03.991 09:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:03.991 09:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.249 09:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.249 09:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.249 09:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.249 09:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:20:04.249 09:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.249 09:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:04.249 { 00:20:04.249 "cntlid": 23, 00:20:04.249 "qid": 0, 00:20:04.249 "state": "enabled", 00:20:04.249 "thread": "nvmf_tgt_poll_group_000", 00:20:04.249 "listen_address": { 00:20:04.249 "trtype": "TCP", 00:20:04.249 "adrfam": "IPv4", 00:20:04.249 "traddr": "10.0.0.2", 00:20:04.249 "trsvcid": "4420" 00:20:04.249 }, 00:20:04.249 "peer_address": { 00:20:04.249 "trtype": "TCP", 00:20:04.249 "adrfam": "IPv4", 00:20:04.249 "traddr": "10.0.0.1", 00:20:04.249 "trsvcid": "54774" 00:20:04.249 }, 00:20:04.249 "auth": { 00:20:04.249 "state": "completed", 00:20:04.249 "digest": "sha256", 00:20:04.249 "dhgroup": "ffdhe3072" 00:20:04.249 } 00:20:04.249 } 00:20:04.249 ]' 00:20:04.249 09:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:04.249 09:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:04.249 09:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:04.507 09:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:04.507 09:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:04.507 09:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.507 09:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.507 09:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.765 09:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZDc0MDkzMjBjZjhlOTY1N2VlY2E2N2E5MGYxZDFhODliMWFjOTcyMzI2MzQyZDZmYmE2ODcxYzhlZDM1OTE4NgHa9ec=: 00:20:05.700 09:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.700 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.700 09:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:05.700 09:29:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.700 09:29:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.700 09:29:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.700 09:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:05.701 09:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:05.701 09:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:05.701 09:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:05.958 09:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:20:05.958 09:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:05.958 09:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:05.958 09:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:05.958 09:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:05.958 09:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.958 09:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.958 09:29:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.958 09:29:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.958 09:29:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.958 09:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.958 09:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.525 00:20:06.525 09:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:06.525 09:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:06.525 09:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.525 09:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.525 09:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.525 09:29:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.525 09:29:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.525 09:29:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.525 09:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:06.525 { 00:20:06.525 "cntlid": 25, 00:20:06.525 "qid": 0, 00:20:06.525 "state": "enabled", 00:20:06.525 "thread": "nvmf_tgt_poll_group_000", 00:20:06.525 "listen_address": { 00:20:06.525 "trtype": "TCP", 00:20:06.525 "adrfam": "IPv4", 00:20:06.525 "traddr": "10.0.0.2", 00:20:06.525 "trsvcid": "4420" 00:20:06.525 }, 00:20:06.525 "peer_address": { 00:20:06.525 "trtype": "TCP", 00:20:06.525 "adrfam": "IPv4", 00:20:06.525 "traddr": "10.0.0.1", 00:20:06.525 "trsvcid": "54802" 00:20:06.525 }, 00:20:06.525 "auth": { 00:20:06.525 "state": "completed", 00:20:06.525 "digest": "sha256", 00:20:06.525 "dhgroup": "ffdhe4096" 00:20:06.525 } 00:20:06.525 } 00:20:06.525 ]' 00:20:06.525 09:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:06.783 09:29:51 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:06.783 09:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:06.783 09:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:06.783 09:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:06.783 09:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.783 09:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.783 09:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.040 09:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NzQ1YjdmZDVmOTNlNTAxZTM0YTAzMGFmZDc4MTY5ZmM1N2MxNGZjMmRlMmVhZWEwIyEwPg==: --dhchap-ctrl-secret DHHC-1:03:ZTNhNmFiYTIyNmNjODY3YTdkMDYyNWMwMGIyMGE0NzdjNDM2YzJlZjE5YzA0ZDg1MWZkNzQwZTVjYTNjZjVhZBT0aQU=: 00:20:07.972 09:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.972 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.972 09:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:07.972 09:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.972 09:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.972 09:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.972 09:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:07.972 09:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:07.972 09:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:08.230 09:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:20:08.230 09:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:08.230 09:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:08.230 09:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:08.230 09:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:08.230 09:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:08.230 09:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.230 09:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.230 09:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.230 09:29:52 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.230 09:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.230 09:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.795 00:20:08.795 09:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:08.795 09:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:08.795 09:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.053 09:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.053 09:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.053 09:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.053 09:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.053 09:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.053 09:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:09.053 { 00:20:09.053 "cntlid": 27, 00:20:09.053 "qid": 0, 00:20:09.053 "state": "enabled", 00:20:09.053 "thread": "nvmf_tgt_poll_group_000", 00:20:09.053 "listen_address": { 00:20:09.053 "trtype": "TCP", 00:20:09.053 "adrfam": "IPv4", 00:20:09.053 "traddr": "10.0.0.2", 00:20:09.053 "trsvcid": "4420" 00:20:09.053 }, 00:20:09.053 "peer_address": { 00:20:09.053 "trtype": "TCP", 00:20:09.053 "adrfam": "IPv4", 00:20:09.053 "traddr": "10.0.0.1", 00:20:09.053 "trsvcid": "54816" 00:20:09.053 }, 00:20:09.053 "auth": { 00:20:09.053 "state": "completed", 00:20:09.053 "digest": "sha256", 00:20:09.053 "dhgroup": "ffdhe4096" 00:20:09.053 } 00:20:09.053 } 00:20:09.053 ]' 00:20:09.053 09:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:09.053 09:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:09.053 09:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:09.053 09:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:09.053 09:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:09.053 09:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.053 09:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.053 09:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.311 09:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MGZjM2VhZDFmMDg1OTVlODc3OWIwODUwZjNkOWIwYzNGi3HN: --dhchap-ctrl-secret DHHC-1:02:YzQxZmU0MGQ3ZjQ4MGNiNzVhNjcwNWU4MTcwZmZmZGQwNDIyYTcwMWE3ZjFkYTVjZh8NIw==: 00:20:10.242 09:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.242 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.242 09:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:10.242 09:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.242 09:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.242 09:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.242 09:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:10.242 09:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:10.242 09:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:10.499 09:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:20:10.499 09:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:10.499 09:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:10.499 09:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:10.499 09:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:10.499 09:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.499 09:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.499 09:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.499 09:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.499 09:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.499 09:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.499 09:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:11.065 00:20:11.065 09:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:11.065 09:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.065 09:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:11.065 09:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.065 09:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.065 09:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.065 09:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.065 09:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.065 09:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:11.065 { 00:20:11.065 "cntlid": 29, 00:20:11.065 "qid": 0, 00:20:11.065 "state": "enabled", 00:20:11.065 "thread": "nvmf_tgt_poll_group_000", 00:20:11.065 "listen_address": { 00:20:11.065 "trtype": "TCP", 00:20:11.065 "adrfam": "IPv4", 00:20:11.065 "traddr": "10.0.0.2", 00:20:11.065 "trsvcid": "4420" 00:20:11.065 }, 00:20:11.065 "peer_address": { 00:20:11.065 "trtype": "TCP", 00:20:11.065 "adrfam": "IPv4", 00:20:11.065 "traddr": "10.0.0.1", 00:20:11.065 "trsvcid": "54830" 00:20:11.065 }, 00:20:11.065 "auth": { 00:20:11.065 "state": "completed", 00:20:11.065 "digest": "sha256", 00:20:11.065 "dhgroup": "ffdhe4096" 00:20:11.065 } 00:20:11.065 } 00:20:11.065 ]' 00:20:11.065 09:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:11.323 09:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:11.323 09:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:11.323 09:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:11.323 09:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:11.323 09:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.323 09:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.323 09:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.595 09:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZDE5N2YzOTY4OTNjMzgwYTM4MmQ0MzdkYWMzMTc1NWNlMzQzOGNhZWM3ZmJkMmJkcIhjtQ==: --dhchap-ctrl-secret DHHC-1:01:ZWQ5NTllNDE4M2ZkMjY5YTBhMTE4NTYwZDhkMzI0NziXBON7: 00:20:12.527 09:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.527 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.527 09:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:12.527 09:29:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.527 09:29:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.527 09:29:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
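The ffdhe4096/key2 pass ends here and the key3 pass starts immediately below. Judging from the xtrace markers target/auth.sh@92-@96 that recur throughout this log, the driver loop presumably has roughly the following shape; the names are taken from the trace, the literal sha256 is what this stretch of the log exercises, and the actual body of target/auth.sh may differ in detail:

    for dhgroup in "${dhgroups[@]}"; do        # ffdhe2048, ffdhe3072, ffdhe4096, ffdhe6144, ...
        for keyid in "${!keys[@]}"; do         # 0..3 in this log
            # pin the SPDK host to the digest/dhgroup pair being tested
            hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
            # add_host + attach_controller + qpair checks + kernel nvme connect/disconnect
            connect_authenticate sha256 "$dhgroup" "$keyid"
        done
    done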
00:20:12.527 09:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:12.527 09:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:12.527 09:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:12.785 09:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:20:12.785 09:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:12.785 09:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:12.785 09:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:12.785 09:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:12.785 09:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.785 09:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:12.785 09:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.785 09:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.785 09:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.785 09:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:12.785 09:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:13.042 00:20:13.300 09:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:13.300 09:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:13.300 09:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.300 09:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.300 09:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.300 09:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.300 09:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.558 09:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.558 09:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:13.558 { 00:20:13.558 "cntlid": 31, 00:20:13.558 "qid": 0, 00:20:13.558 "state": "enabled", 00:20:13.558 "thread": "nvmf_tgt_poll_group_000", 00:20:13.558 "listen_address": { 00:20:13.558 "trtype": "TCP", 00:20:13.558 "adrfam": "IPv4", 00:20:13.558 "traddr": "10.0.0.2", 00:20:13.558 "trsvcid": 
"4420" 00:20:13.558 }, 00:20:13.558 "peer_address": { 00:20:13.558 "trtype": "TCP", 00:20:13.558 "adrfam": "IPv4", 00:20:13.558 "traddr": "10.0.0.1", 00:20:13.558 "trsvcid": "54858" 00:20:13.558 }, 00:20:13.558 "auth": { 00:20:13.558 "state": "completed", 00:20:13.558 "digest": "sha256", 00:20:13.558 "dhgroup": "ffdhe4096" 00:20:13.558 } 00:20:13.558 } 00:20:13.558 ]' 00:20:13.558 09:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:13.558 09:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:13.558 09:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:13.558 09:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:13.558 09:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:13.558 09:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.558 09:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.558 09:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.816 09:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZDc0MDkzMjBjZjhlOTY1N2VlY2E2N2E5MGYxZDFhODliMWFjOTcyMzI2MzQyZDZmYmE2ODcxYzhlZDM1OTE4NgHa9ec=: 00:20:14.748 09:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.748 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.748 09:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:14.748 09:29:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.748 09:29:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.748 09:29:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.748 09:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:14.748 09:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:14.748 09:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:14.748 09:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:15.006 09:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:20:15.006 09:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:15.006 09:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:15.006 09:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:15.007 09:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:15.007 09:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:15.007 09:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:15.007 09:29:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.007 09:29:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.007 09:29:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.007 09:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:15.007 09:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:15.573 00:20:15.573 09:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:15.573 09:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.573 09:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:15.832 09:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.832 09:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:15.832 09:30:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.832 09:30:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.832 09:30:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.832 09:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:15.832 { 00:20:15.832 "cntlid": 33, 00:20:15.832 "qid": 0, 00:20:15.832 "state": "enabled", 00:20:15.832 "thread": "nvmf_tgt_poll_group_000", 00:20:15.832 "listen_address": { 00:20:15.832 "trtype": "TCP", 00:20:15.832 "adrfam": "IPv4", 00:20:15.832 "traddr": "10.0.0.2", 00:20:15.832 "trsvcid": "4420" 00:20:15.832 }, 00:20:15.832 "peer_address": { 00:20:15.832 "trtype": "TCP", 00:20:15.832 "adrfam": "IPv4", 00:20:15.832 "traddr": "10.0.0.1", 00:20:15.832 "trsvcid": "52682" 00:20:15.832 }, 00:20:15.832 "auth": { 00:20:15.832 "state": "completed", 00:20:15.832 "digest": "sha256", 00:20:15.832 "dhgroup": "ffdhe6144" 00:20:15.832 } 00:20:15.832 } 00:20:15.832 ]' 00:20:15.832 09:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:15.832 09:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:15.832 09:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:16.091 09:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:16.091 09:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:16.091 09:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:20:16.091 09:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.091 09:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.349 09:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NzQ1YjdmZDVmOTNlNTAxZTM0YTAzMGFmZDc4MTY5ZmM1N2MxNGZjMmRlMmVhZWEwIyEwPg==: --dhchap-ctrl-secret DHHC-1:03:ZTNhNmFiYTIyNmNjODY3YTdkMDYyNWMwMGIyMGE0NzdjNDM2YzJlZjE5YzA0ZDg1MWZkNzQwZTVjYTNjZjVhZBT0aQU=: 00:20:17.282 09:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.282 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.282 09:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:17.282 09:30:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.282 09:30:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.282 09:30:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.282 09:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:17.282 09:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:17.282 09:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:17.540 09:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:20:17.540 09:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:17.540 09:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:17.540 09:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:17.540 09:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:17.540 09:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.540 09:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.540 09:30:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.540 09:30:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.540 09:30:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.540 09:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.540 09:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:18.106 00:20:18.106 09:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:18.106 09:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:18.106 09:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.364 09:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.364 09:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.364 09:30:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.364 09:30:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.364 09:30:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.364 09:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:18.364 { 00:20:18.364 "cntlid": 35, 00:20:18.364 "qid": 0, 00:20:18.364 "state": "enabled", 00:20:18.364 "thread": "nvmf_tgt_poll_group_000", 00:20:18.364 "listen_address": { 00:20:18.364 "trtype": "TCP", 00:20:18.364 "adrfam": "IPv4", 00:20:18.364 "traddr": "10.0.0.2", 00:20:18.364 "trsvcid": "4420" 00:20:18.364 }, 00:20:18.364 "peer_address": { 00:20:18.364 "trtype": "TCP", 00:20:18.364 "adrfam": "IPv4", 00:20:18.364 "traddr": "10.0.0.1", 00:20:18.364 "trsvcid": "52702" 00:20:18.364 }, 00:20:18.364 "auth": { 00:20:18.364 "state": "completed", 00:20:18.364 "digest": "sha256", 00:20:18.364 "dhgroup": "ffdhe6144" 00:20:18.364 } 00:20:18.364 } 00:20:18.364 ]' 00:20:18.364 09:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:18.364 09:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:18.364 09:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:18.364 09:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:18.364 09:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:18.364 09:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.364 09:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.364 09:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.622 09:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MGZjM2VhZDFmMDg1OTVlODc3OWIwODUwZjNkOWIwYzNGi3HN: --dhchap-ctrl-secret DHHC-1:02:YzQxZmU0MGQ3ZjQ4MGNiNzVhNjcwNWU4MTcwZmZmZGQwNDIyYTcwMWE3ZjFkYTVjZh8NIw==: 00:20:19.553 09:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.553 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
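The kernel-initiator round trip that just finished above, and that closes every pass in this log, uses nvme-cli directly rather than the SPDK host: connect with the DHHC-1 secrets for the key under test, then disconnect so the host entry can be removed from the subsystem. Sketch with the flags exactly as they appear in the log; the DHHC-1 strings are elided here, the full values are the ones printed above:

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
        --dhchap-secret 'DHHC-1:01:...' --dhchap-ctrl-secret 'DHHC-1:02:...'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # expect: "disconnected 1 controller(s)"

The nvmf_subsystem_remove_host call in the next entry then clears the host authorization before the following pass.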
00:20:19.553 09:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:19.553 09:30:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.553 09:30:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.811 09:30:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.811 09:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:19.811 09:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:19.811 09:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:20.069 09:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:20:20.069 09:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:20.069 09:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:20.069 09:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:20.069 09:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:20.069 09:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.069 09:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:20.069 09:30:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.069 09:30:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.069 09:30:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.069 09:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:20.069 09:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:20.635 00:20:20.635 09:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:20.635 09:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.635 09:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:20.635 09:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.635 09:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.635 09:30:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
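For each pass, the three jq probes at target/auth.sh@46-48 (run one call at a time in the trace) confirm on the target side that the qpair really negotiated the digest and DH group under test and that authentication completed. Collapsed into one snippet for the current sha256/ffdhe6144 pass, under the same assumption that rpc.py on its default socket addresses the target:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  # a non-matching field returns a non-zero status and fails the pass
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]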
00:20:20.635 09:30:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.635 09:30:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.635 09:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:20.635 { 00:20:20.635 "cntlid": 37, 00:20:20.635 "qid": 0, 00:20:20.635 "state": "enabled", 00:20:20.635 "thread": "nvmf_tgt_poll_group_000", 00:20:20.635 "listen_address": { 00:20:20.635 "trtype": "TCP", 00:20:20.635 "adrfam": "IPv4", 00:20:20.635 "traddr": "10.0.0.2", 00:20:20.635 "trsvcid": "4420" 00:20:20.635 }, 00:20:20.635 "peer_address": { 00:20:20.635 "trtype": "TCP", 00:20:20.635 "adrfam": "IPv4", 00:20:20.635 "traddr": "10.0.0.1", 00:20:20.635 "trsvcid": "52740" 00:20:20.635 }, 00:20:20.635 "auth": { 00:20:20.635 "state": "completed", 00:20:20.635 "digest": "sha256", 00:20:20.635 "dhgroup": "ffdhe6144" 00:20:20.635 } 00:20:20.635 } 00:20:20.635 ]' 00:20:20.635 09:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:20.893 09:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:20.893 09:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:20.893 09:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:20.893 09:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:20.893 09:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.893 09:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.893 09:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.151 09:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZDE5N2YzOTY4OTNjMzgwYTM4MmQ0MzdkYWMzMTc1NWNlMzQzOGNhZWM3ZmJkMmJkcIhjtQ==: --dhchap-ctrl-secret DHHC-1:01:ZWQ5NTllNDE4M2ZkMjY5YTBhMTE4NTYwZDhkMzI0NziXBON7: 00:20:22.085 09:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.085 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.085 09:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:22.085 09:30:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.085 09:30:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.085 09:30:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.085 09:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:22.085 09:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:22.085 09:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:22.343 09:30:06 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:20:22.343 09:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:22.343 09:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:22.343 09:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:22.343 09:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:22.343 09:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.343 09:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:22.343 09:30:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.343 09:30:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.343 09:30:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.343 09:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:22.343 09:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:22.908 00:20:22.908 09:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:22.908 09:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:22.908 09:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.166 09:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.166 09:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.166 09:30:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.166 09:30:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.166 09:30:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.166 09:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:23.166 { 00:20:23.166 "cntlid": 39, 00:20:23.166 "qid": 0, 00:20:23.166 "state": "enabled", 00:20:23.166 "thread": "nvmf_tgt_poll_group_000", 00:20:23.166 "listen_address": { 00:20:23.166 "trtype": "TCP", 00:20:23.166 "adrfam": "IPv4", 00:20:23.166 "traddr": "10.0.0.2", 00:20:23.166 "trsvcid": "4420" 00:20:23.166 }, 00:20:23.166 "peer_address": { 00:20:23.166 "trtype": "TCP", 00:20:23.166 "adrfam": "IPv4", 00:20:23.166 "traddr": "10.0.0.1", 00:20:23.166 "trsvcid": "52768" 00:20:23.166 }, 00:20:23.166 "auth": { 00:20:23.166 "state": "completed", 00:20:23.166 "digest": "sha256", 00:20:23.166 "dhgroup": "ffdhe6144" 00:20:23.166 } 00:20:23.166 } 00:20:23.166 ]' 00:20:23.166 09:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:23.166 09:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # 
[[ sha256 == \s\h\a\2\5\6 ]] 00:20:23.166 09:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:23.423 09:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:23.423 09:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:23.423 09:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.423 09:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.423 09:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.680 09:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZDc0MDkzMjBjZjhlOTY1N2VlY2E2N2E5MGYxZDFhODliMWFjOTcyMzI2MzQyZDZmYmE2ODcxYzhlZDM1OTE4NgHa9ec=: 00:20:24.666 09:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.666 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.666 09:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:24.666 09:30:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.667 09:30:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.667 09:30:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.667 09:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:24.667 09:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:24.667 09:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:24.667 09:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:24.924 09:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:20:24.924 09:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:24.924 09:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:24.924 09:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:24.924 09:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:24.924 09:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.925 09:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.925 09:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.925 09:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.925 09:30:09 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.925 09:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.925 09:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.858 00:20:25.858 09:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:25.858 09:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:25.858 09:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.115 09:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.115 09:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.115 09:30:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.115 09:30:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.115 09:30:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.115 09:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:26.115 { 00:20:26.115 "cntlid": 41, 00:20:26.115 "qid": 0, 00:20:26.115 "state": "enabled", 00:20:26.115 "thread": "nvmf_tgt_poll_group_000", 00:20:26.115 "listen_address": { 00:20:26.115 "trtype": "TCP", 00:20:26.115 "adrfam": "IPv4", 00:20:26.115 "traddr": "10.0.0.2", 00:20:26.115 "trsvcid": "4420" 00:20:26.115 }, 00:20:26.115 "peer_address": { 00:20:26.115 "trtype": "TCP", 00:20:26.115 "adrfam": "IPv4", 00:20:26.115 "traddr": "10.0.0.1", 00:20:26.115 "trsvcid": "56954" 00:20:26.115 }, 00:20:26.115 "auth": { 00:20:26.115 "state": "completed", 00:20:26.115 "digest": "sha256", 00:20:26.115 "dhgroup": "ffdhe8192" 00:20:26.115 } 00:20:26.115 } 00:20:26.115 ]' 00:20:26.115 09:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:26.115 09:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:26.115 09:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:26.115 09:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:26.115 09:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:26.115 09:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.115 09:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.115 09:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.372 09:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NzQ1YjdmZDVmOTNlNTAxZTM0YTAzMGFmZDc4MTY5ZmM1N2MxNGZjMmRlMmVhZWEwIyEwPg==: --dhchap-ctrl-secret DHHC-1:03:ZTNhNmFiYTIyNmNjODY3YTdkMDYyNWMwMGIyMGE0NzdjNDM2YzJlZjE5YzA0ZDg1MWZkNzQwZTVjYTNjZjVhZBT0aQU=: 00:20:27.304 09:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.304 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.304 09:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:27.304 09:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.304 09:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.304 09:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.304 09:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:27.304 09:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:27.304 09:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:27.562 09:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:20:27.562 09:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:27.562 09:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:27.562 09:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:27.562 09:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:27.562 09:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.562 09:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.562 09:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.562 09:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.562 09:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.562 09:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.562 09:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.493 00:20:28.493 09:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:28.493 09:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:28.493 09:30:12 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.751 09:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.751 09:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.751 09:30:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.751 09:30:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.751 09:30:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.751 09:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:28.751 { 00:20:28.751 "cntlid": 43, 00:20:28.751 "qid": 0, 00:20:28.751 "state": "enabled", 00:20:28.751 "thread": "nvmf_tgt_poll_group_000", 00:20:28.751 "listen_address": { 00:20:28.751 "trtype": "TCP", 00:20:28.751 "adrfam": "IPv4", 00:20:28.751 "traddr": "10.0.0.2", 00:20:28.751 "trsvcid": "4420" 00:20:28.751 }, 00:20:28.751 "peer_address": { 00:20:28.751 "trtype": "TCP", 00:20:28.751 "adrfam": "IPv4", 00:20:28.751 "traddr": "10.0.0.1", 00:20:28.751 "trsvcid": "56982" 00:20:28.751 }, 00:20:28.751 "auth": { 00:20:28.751 "state": "completed", 00:20:28.751 "digest": "sha256", 00:20:28.751 "dhgroup": "ffdhe8192" 00:20:28.751 } 00:20:28.751 } 00:20:28.751 ]' 00:20:28.751 09:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:28.751 09:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:28.751 09:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:28.751 09:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:28.751 09:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:28.751 09:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.751 09:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.751 09:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.009 09:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MGZjM2VhZDFmMDg1OTVlODc3OWIwODUwZjNkOWIwYzNGi3HN: --dhchap-ctrl-secret DHHC-1:02:YzQxZmU0MGQ3ZjQ4MGNiNzVhNjcwNWU4MTcwZmZmZGQwNDIyYTcwMWE3ZjFkYTVjZh8NIw==: 00:20:30.380 09:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.380 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.380 09:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:30.380 09:30:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.380 09:30:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.380 09:30:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.380 09:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 
-- # for keyid in "${!keys[@]}" 00:20:30.380 09:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:30.380 09:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:30.380 09:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:20:30.380 09:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:30.380 09:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:30.380 09:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:30.380 09:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:30.380 09:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.380 09:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.380 09:30:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.380 09:30:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.380 09:30:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.380 09:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.380 09:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:31.314 00:20:31.314 09:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:31.314 09:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:31.314 09:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.572 09:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.572 09:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.572 09:30:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.572 09:30:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.572 09:30:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.572 09:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:31.572 { 00:20:31.572 "cntlid": 45, 00:20:31.572 "qid": 0, 00:20:31.572 "state": "enabled", 00:20:31.572 "thread": "nvmf_tgt_poll_group_000", 00:20:31.572 "listen_address": { 00:20:31.572 "trtype": "TCP", 00:20:31.572 "adrfam": "IPv4", 00:20:31.572 "traddr": "10.0.0.2", 00:20:31.572 
"trsvcid": "4420" 00:20:31.572 }, 00:20:31.572 "peer_address": { 00:20:31.572 "trtype": "TCP", 00:20:31.572 "adrfam": "IPv4", 00:20:31.572 "traddr": "10.0.0.1", 00:20:31.572 "trsvcid": "57008" 00:20:31.572 }, 00:20:31.572 "auth": { 00:20:31.572 "state": "completed", 00:20:31.572 "digest": "sha256", 00:20:31.572 "dhgroup": "ffdhe8192" 00:20:31.572 } 00:20:31.572 } 00:20:31.572 ]' 00:20:31.572 09:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:31.572 09:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:31.572 09:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:31.572 09:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:31.572 09:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:31.572 09:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.572 09:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.572 09:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.830 09:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZDE5N2YzOTY4OTNjMzgwYTM4MmQ0MzdkYWMzMTc1NWNlMzQzOGNhZWM3ZmJkMmJkcIhjtQ==: --dhchap-ctrl-secret DHHC-1:01:ZWQ5NTllNDE4M2ZkMjY5YTBhMTE4NTYwZDhkMzI0NziXBON7: 00:20:33.202 09:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.202 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.202 09:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:33.202 09:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.202 09:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.202 09:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.202 09:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:33.202 09:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:33.202 09:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:33.202 09:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:20:33.202 09:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:33.202 09:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:33.202 09:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:33.202 09:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:33.202 09:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
00:20:33.202 09:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:33.202 09:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.202 09:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.202 09:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.202 09:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:33.202 09:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:34.131 00:20:34.132 09:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:34.132 09:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:34.132 09:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.389 09:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.389 09:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.389 09:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.389 09:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.389 09:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.389 09:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:34.389 { 00:20:34.389 "cntlid": 47, 00:20:34.389 "qid": 0, 00:20:34.389 "state": "enabled", 00:20:34.389 "thread": "nvmf_tgt_poll_group_000", 00:20:34.389 "listen_address": { 00:20:34.389 "trtype": "TCP", 00:20:34.389 "adrfam": "IPv4", 00:20:34.389 "traddr": "10.0.0.2", 00:20:34.389 "trsvcid": "4420" 00:20:34.389 }, 00:20:34.389 "peer_address": { 00:20:34.389 "trtype": "TCP", 00:20:34.389 "adrfam": "IPv4", 00:20:34.389 "traddr": "10.0.0.1", 00:20:34.389 "trsvcid": "34708" 00:20:34.389 }, 00:20:34.389 "auth": { 00:20:34.389 "state": "completed", 00:20:34.389 "digest": "sha256", 00:20:34.389 "dhgroup": "ffdhe8192" 00:20:34.389 } 00:20:34.389 } 00:20:34.389 ]' 00:20:34.389 09:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:34.389 09:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:34.389 09:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:34.389 09:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:34.389 09:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:34.389 09:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.389 09:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller 
nvme0 00:20:34.389 09:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.645 09:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZDc0MDkzMjBjZjhlOTY1N2VlY2E2N2E5MGYxZDFhODliMWFjOTcyMzI2MzQyZDZmYmE2ODcxYzhlZDM1OTE4NgHa9ec=: 00:20:35.574 09:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.574 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.574 09:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:35.574 09:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.574 09:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.574 09:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.574 09:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:35.574 09:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:35.574 09:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:35.574 09:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:35.574 09:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:36.138 09:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:20:36.138 09:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:36.138 09:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:36.138 09:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:36.138 09:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:36.138 09:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.138 09:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.138 09:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.138 09:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.138 09:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.138 09:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.138 09:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.395 00:20:36.395 09:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:36.395 09:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:36.395 09:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.651 09:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.651 09:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.651 09:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.651 09:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.651 09:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.651 09:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:36.651 { 00:20:36.651 "cntlid": 49, 00:20:36.651 "qid": 0, 00:20:36.651 "state": "enabled", 00:20:36.651 "thread": "nvmf_tgt_poll_group_000", 00:20:36.651 "listen_address": { 00:20:36.651 "trtype": "TCP", 00:20:36.651 "adrfam": "IPv4", 00:20:36.651 "traddr": "10.0.0.2", 00:20:36.651 "trsvcid": "4420" 00:20:36.651 }, 00:20:36.651 "peer_address": { 00:20:36.651 "trtype": "TCP", 00:20:36.651 "adrfam": "IPv4", 00:20:36.651 "traddr": "10.0.0.1", 00:20:36.651 "trsvcid": "34716" 00:20:36.651 }, 00:20:36.651 "auth": { 00:20:36.651 "state": "completed", 00:20:36.651 "digest": "sha384", 00:20:36.651 "dhgroup": "null" 00:20:36.651 } 00:20:36.651 } 00:20:36.651 ]' 00:20:36.651 09:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:36.651 09:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:36.651 09:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:36.651 09:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:36.651 09:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:36.651 09:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.651 09:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.651 09:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.908 09:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NzQ1YjdmZDVmOTNlNTAxZTM0YTAzMGFmZDc4MTY5ZmM1N2MxNGZjMmRlMmVhZWEwIyEwPg==: --dhchap-ctrl-secret DHHC-1:03:ZTNhNmFiYTIyNmNjODY3YTdkMDYyNWMwMGIyMGE0NzdjNDM2YzJlZjE5YzA0ZDg1MWZkNzQwZTVjYTNjZjVhZBT0aQU=: 00:20:37.864 09:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.864 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.864 09:30:22 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:37.864 09:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.864 09:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.864 09:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.864 09:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:37.864 09:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:37.864 09:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:38.122 09:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:20:38.122 09:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:38.122 09:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:38.122 09:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:38.122 09:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:38.122 09:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.122 09:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.122 09:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.122 09:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.122 09:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.122 09:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.122 09:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.688 00:20:38.688 09:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:38.688 09:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:38.688 09:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.961 09:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.961 09:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.961 09:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.961 09:30:23 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:20:38.961 09:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.961 09:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:38.961 { 00:20:38.961 "cntlid": 51, 00:20:38.961 "qid": 0, 00:20:38.961 "state": "enabled", 00:20:38.961 "thread": "nvmf_tgt_poll_group_000", 00:20:38.961 "listen_address": { 00:20:38.961 "trtype": "TCP", 00:20:38.961 "adrfam": "IPv4", 00:20:38.961 "traddr": "10.0.0.2", 00:20:38.961 "trsvcid": "4420" 00:20:38.961 }, 00:20:38.961 "peer_address": { 00:20:38.961 "trtype": "TCP", 00:20:38.961 "adrfam": "IPv4", 00:20:38.961 "traddr": "10.0.0.1", 00:20:38.961 "trsvcid": "34748" 00:20:38.961 }, 00:20:38.961 "auth": { 00:20:38.961 "state": "completed", 00:20:38.961 "digest": "sha384", 00:20:38.961 "dhgroup": "null" 00:20:38.961 } 00:20:38.961 } 00:20:38.961 ]' 00:20:38.961 09:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:38.961 09:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:38.961 09:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:38.961 09:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:38.961 09:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:38.961 09:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:38.961 09:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.961 09:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.225 09:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MGZjM2VhZDFmMDg1OTVlODc3OWIwODUwZjNkOWIwYzNGi3HN: --dhchap-ctrl-secret DHHC-1:02:YzQxZmU0MGQ3ZjQ4MGNiNzVhNjcwNWU4MTcwZmZmZGQwNDIyYTcwMWE3ZjFkYTVjZh8NIw==: 00:20:40.157 09:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.157 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.157 09:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:40.157 09:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.157 09:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.157 09:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.157 09:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:40.157 09:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:40.157 09:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:40.415 09:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:20:40.415 
09:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:40.415 09:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:40.415 09:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:40.415 09:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:40.415 09:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.415 09:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.415 09:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.415 09:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.416 09:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.416 09:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.416 09:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.674 00:20:40.674 09:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:40.674 09:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.674 09:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:40.932 09:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.932 09:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.932 09:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.932 09:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.932 09:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.932 09:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:40.932 { 00:20:40.932 "cntlid": 53, 00:20:40.932 "qid": 0, 00:20:40.932 "state": "enabled", 00:20:40.932 "thread": "nvmf_tgt_poll_group_000", 00:20:40.932 "listen_address": { 00:20:40.932 "trtype": "TCP", 00:20:40.932 "adrfam": "IPv4", 00:20:40.932 "traddr": "10.0.0.2", 00:20:40.932 "trsvcid": "4420" 00:20:40.932 }, 00:20:40.932 "peer_address": { 00:20:40.932 "trtype": "TCP", 00:20:40.932 "adrfam": "IPv4", 00:20:40.932 "traddr": "10.0.0.1", 00:20:40.932 "trsvcid": "34762" 00:20:40.932 }, 00:20:40.932 "auth": { 00:20:40.932 "state": "completed", 00:20:40.932 "digest": "sha384", 00:20:40.932 "dhgroup": "null" 00:20:40.932 } 00:20:40.932 } 00:20:40.932 ]' 00:20:40.932 09:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:41.190 09:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:20:41.190 09:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:41.190 09:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:41.190 09:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:41.190 09:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.190 09:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.190 09:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.447 09:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZDE5N2YzOTY4OTNjMzgwYTM4MmQ0MzdkYWMzMTc1NWNlMzQzOGNhZWM3ZmJkMmJkcIhjtQ==: --dhchap-ctrl-secret DHHC-1:01:ZWQ5NTllNDE4M2ZkMjY5YTBhMTE4NTYwZDhkMzI0NziXBON7: 00:20:42.381 09:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.381 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.381 09:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:42.381 09:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.381 09:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.381 09:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.381 09:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:42.381 09:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:42.381 09:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:42.639 09:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:20:42.639 09:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:42.639 09:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:42.639 09:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:42.639 09:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:42.639 09:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.639 09:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:42.639 09:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.639 09:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.639 09:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.639 09:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:42.639 09:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:42.897 00:20:42.897 09:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:42.897 09:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:42.897 09:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.155 09:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.155 09:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.155 09:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.155 09:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.155 09:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.155 09:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:43.155 { 00:20:43.155 "cntlid": 55, 00:20:43.155 "qid": 0, 00:20:43.155 "state": "enabled", 00:20:43.155 "thread": "nvmf_tgt_poll_group_000", 00:20:43.155 "listen_address": { 00:20:43.155 "trtype": "TCP", 00:20:43.155 "adrfam": "IPv4", 00:20:43.155 "traddr": "10.0.0.2", 00:20:43.155 "trsvcid": "4420" 00:20:43.155 }, 00:20:43.155 "peer_address": { 00:20:43.155 "trtype": "TCP", 00:20:43.155 "adrfam": "IPv4", 00:20:43.155 "traddr": "10.0.0.1", 00:20:43.155 "trsvcid": "34790" 00:20:43.155 }, 00:20:43.155 "auth": { 00:20:43.155 "state": "completed", 00:20:43.155 "digest": "sha384", 00:20:43.155 "dhgroup": "null" 00:20:43.155 } 00:20:43.155 } 00:20:43.155 ]' 00:20:43.155 09:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:43.155 09:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:43.155 09:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:43.414 09:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:43.414 09:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:43.414 09:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.414 09:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.414 09:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.672 09:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZDc0MDkzMjBjZjhlOTY1N2VlY2E2N2E5MGYxZDFhODliMWFjOTcyMzI2MzQyZDZmYmE2ODcxYzhlZDM1OTE4NgHa9ec=: 00:20:44.606 09:30:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.606 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.606 09:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:44.606 09:30:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.606 09:30:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.606 09:30:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.606 09:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:44.606 09:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:44.606 09:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:44.606 09:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:44.864 09:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:20:44.864 09:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:44.864 09:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:44.864 09:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:44.864 09:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:44.864 09:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.864 09:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.864 09:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.864 09:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.864 09:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.864 09:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.864 09:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.122 00:20:45.122 09:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:45.122 09:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:45.122 09:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.380 09:30:29 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.380 09:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.380 09:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.380 09:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.380 09:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.380 09:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:45.380 { 00:20:45.380 "cntlid": 57, 00:20:45.380 "qid": 0, 00:20:45.380 "state": "enabled", 00:20:45.380 "thread": "nvmf_tgt_poll_group_000", 00:20:45.380 "listen_address": { 00:20:45.380 "trtype": "TCP", 00:20:45.380 "adrfam": "IPv4", 00:20:45.380 "traddr": "10.0.0.2", 00:20:45.380 "trsvcid": "4420" 00:20:45.380 }, 00:20:45.380 "peer_address": { 00:20:45.380 "trtype": "TCP", 00:20:45.380 "adrfam": "IPv4", 00:20:45.380 "traddr": "10.0.0.1", 00:20:45.380 "trsvcid": "32956" 00:20:45.380 }, 00:20:45.380 "auth": { 00:20:45.380 "state": "completed", 00:20:45.380 "digest": "sha384", 00:20:45.380 "dhgroup": "ffdhe2048" 00:20:45.380 } 00:20:45.380 } 00:20:45.380 ]' 00:20:45.380 09:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:45.380 09:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:45.380 09:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:45.380 09:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:45.380 09:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:45.638 09:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.638 09:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.638 09:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.896 09:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NzQ1YjdmZDVmOTNlNTAxZTM0YTAzMGFmZDc4MTY5ZmM1N2MxNGZjMmRlMmVhZWEwIyEwPg==: --dhchap-ctrl-secret DHHC-1:03:ZTNhNmFiYTIyNmNjODY3YTdkMDYyNWMwMGIyMGE0NzdjNDM2YzJlZjE5YzA0ZDg1MWZkNzQwZTVjYTNjZjVhZBT0aQU=: 00:20:46.829 09:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.829 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.829 09:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:46.829 09:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.829 09:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.829 09:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.829 09:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:46.829 09:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:46.829 09:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:47.087 09:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:20:47.087 09:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:47.087 09:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:47.087 09:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:47.087 09:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:47.087 09:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.087 09:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.087 09:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.087 09:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.087 09:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.087 09:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.087 09:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.345 00:20:47.345 09:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:47.345 09:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.345 09:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:47.603 09:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.603 09:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.603 09:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.603 09:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.603 09:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.603 09:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:47.603 { 00:20:47.603 "cntlid": 59, 00:20:47.603 "qid": 0, 00:20:47.603 "state": "enabled", 00:20:47.603 "thread": "nvmf_tgt_poll_group_000", 00:20:47.603 "listen_address": { 00:20:47.603 "trtype": "TCP", 00:20:47.603 "adrfam": "IPv4", 00:20:47.603 "traddr": "10.0.0.2", 00:20:47.603 "trsvcid": "4420" 00:20:47.603 }, 00:20:47.603 "peer_address": { 00:20:47.603 "trtype": "TCP", 00:20:47.603 "adrfam": "IPv4", 00:20:47.603 
"traddr": "10.0.0.1", 00:20:47.603 "trsvcid": "32978" 00:20:47.603 }, 00:20:47.603 "auth": { 00:20:47.603 "state": "completed", 00:20:47.603 "digest": "sha384", 00:20:47.603 "dhgroup": "ffdhe2048" 00:20:47.603 } 00:20:47.603 } 00:20:47.603 ]' 00:20:47.603 09:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:47.603 09:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:47.603 09:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:47.603 09:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:47.603 09:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:47.603 09:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.603 09:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.603 09:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.861 09:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MGZjM2VhZDFmMDg1OTVlODc3OWIwODUwZjNkOWIwYzNGi3HN: --dhchap-ctrl-secret DHHC-1:02:YzQxZmU0MGQ3ZjQ4MGNiNzVhNjcwNWU4MTcwZmZmZGQwNDIyYTcwMWE3ZjFkYTVjZh8NIw==: 00:20:48.793 09:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.050 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.050 09:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:49.050 09:30:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.051 09:30:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.051 09:30:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.051 09:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:49.051 09:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:49.051 09:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:49.308 09:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:20:49.308 09:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:49.308 09:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:49.308 09:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:49.308 09:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:49.308 09:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.308 09:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.308 09:30:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.308 09:30:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.308 09:30:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.308 09:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.308 09:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.565 00:20:49.565 09:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:49.565 09:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:49.565 09:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.822 09:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.822 09:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.822 09:30:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.822 09:30:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.822 09:30:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.822 09:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:49.822 { 00:20:49.822 "cntlid": 61, 00:20:49.822 "qid": 0, 00:20:49.822 "state": "enabled", 00:20:49.822 "thread": "nvmf_tgt_poll_group_000", 00:20:49.822 "listen_address": { 00:20:49.822 "trtype": "TCP", 00:20:49.822 "adrfam": "IPv4", 00:20:49.822 "traddr": "10.0.0.2", 00:20:49.822 "trsvcid": "4420" 00:20:49.822 }, 00:20:49.822 "peer_address": { 00:20:49.822 "trtype": "TCP", 00:20:49.822 "adrfam": "IPv4", 00:20:49.822 "traddr": "10.0.0.1", 00:20:49.822 "trsvcid": "33000" 00:20:49.822 }, 00:20:49.822 "auth": { 00:20:49.822 "state": "completed", 00:20:49.822 "digest": "sha384", 00:20:49.822 "dhgroup": "ffdhe2048" 00:20:49.822 } 00:20:49.822 } 00:20:49.822 ]' 00:20:49.822 09:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:49.822 09:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:49.822 09:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:49.822 09:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:49.822 09:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:50.084 09:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.084 09:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.084 09:30:34 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.084 09:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZDE5N2YzOTY4OTNjMzgwYTM4MmQ0MzdkYWMzMTc1NWNlMzQzOGNhZWM3ZmJkMmJkcIhjtQ==: --dhchap-ctrl-secret DHHC-1:01:ZWQ5NTllNDE4M2ZkMjY5YTBhMTE4NTYwZDhkMzI0NziXBON7: 00:20:51.455 09:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.455 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.455 09:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:51.455 09:30:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.455 09:30:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.455 09:30:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.455 09:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:51.455 09:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:51.455 09:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:51.455 09:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:20:51.455 09:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:51.455 09:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:51.455 09:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:51.455 09:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:51.455 09:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.455 09:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:51.455 09:30:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.455 09:30:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.455 09:30:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.455 09:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:51.455 09:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:51.712 00:20:51.712 09:30:36 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:51.712 09:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:51.712 09:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.970 09:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.970 09:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.970 09:30:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.970 09:30:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.970 09:30:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.970 09:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:51.970 { 00:20:51.970 "cntlid": 63, 00:20:51.970 "qid": 0, 00:20:51.970 "state": "enabled", 00:20:51.970 "thread": "nvmf_tgt_poll_group_000", 00:20:51.970 "listen_address": { 00:20:51.970 "trtype": "TCP", 00:20:51.970 "adrfam": "IPv4", 00:20:51.970 "traddr": "10.0.0.2", 00:20:51.970 "trsvcid": "4420" 00:20:51.970 }, 00:20:51.970 "peer_address": { 00:20:51.970 "trtype": "TCP", 00:20:51.970 "adrfam": "IPv4", 00:20:51.970 "traddr": "10.0.0.1", 00:20:51.970 "trsvcid": "33032" 00:20:51.970 }, 00:20:51.970 "auth": { 00:20:51.970 "state": "completed", 00:20:51.970 "digest": "sha384", 00:20:51.970 "dhgroup": "ffdhe2048" 00:20:51.970 } 00:20:51.970 } 00:20:51.970 ]' 00:20:51.970 09:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:52.227 09:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:52.227 09:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:52.227 09:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:52.227 09:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:52.227 09:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.227 09:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.227 09:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.485 09:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZDc0MDkzMjBjZjhlOTY1N2VlY2E2N2E5MGYxZDFhODliMWFjOTcyMzI2MzQyZDZmYmE2ODcxYzhlZDM1OTE4NgHa9ec=: 00:20:53.418 09:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.419 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.419 09:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:53.419 09:30:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.419 09:30:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
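The runs above and below all exercise the same connect/authenticate/tear-down flow, once per digest, dhgroup and key combination. A minimal sketch of a single iteration, assembled only from commands that appear verbatim in this log (DIGEST, DHGROUP, KEYID, SECRET and CTRL_SECRET are illustrative placeholders; the actual target/auth.sh loop may be structured differently):

    #!/usr/bin/env bash
    # Sketch of one iteration of the pattern visible in this log (illustrative only).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    hostid=5b23e107-7094-e311-b1cb-001e67a97d55
    DIGEST=sha384 DHGROUP=ffdhe2048 KEYID=0    # example combination from this run

    # Constrain the SPDK host-side bdev layer to the digest/dhgroup under test.
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests "$DIGEST" --dhchap-dhgroups "$DHGROUP"

    # Register the host on the target subsystem with the key (and controller key) under test.
    $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$KEYID" --dhchap-ctrlr-key "ckey$KEYID"

    # Attach through the SPDK host and inspect the authenticated qpair that results.
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key "key$KEYID" --dhchap-ctrlr-key "ckey$KEYID"
    $rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
    $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth'              # expect chosen digest/dhgroup, state "completed"
    $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

    # Repeat the attach with the kernel initiator, then clean up for the next combination.
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" \
        --dhchap-secret "$SECRET" --dhchap-ctrl-secret "$CTRL_SECRET"
    nvme disconnect -n "$subnqn"
    $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
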
00:20:53.419 09:30:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.419 09:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:53.419 09:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:53.419 09:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:53.419 09:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:53.677 09:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:20:53.677 09:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:53.677 09:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:53.677 09:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:53.677 09:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:53.677 09:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.677 09:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.677 09:30:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.677 09:30:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.677 09:30:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.677 09:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.677 09:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.935 00:20:54.192 09:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:54.192 09:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:54.192 09:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.192 09:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.192 09:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.192 09:30:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.192 09:30:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.449 09:30:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.449 09:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:54.449 { 
00:20:54.449 "cntlid": 65, 00:20:54.449 "qid": 0, 00:20:54.449 "state": "enabled", 00:20:54.449 "thread": "nvmf_tgt_poll_group_000", 00:20:54.449 "listen_address": { 00:20:54.449 "trtype": "TCP", 00:20:54.449 "adrfam": "IPv4", 00:20:54.449 "traddr": "10.0.0.2", 00:20:54.449 "trsvcid": "4420" 00:20:54.449 }, 00:20:54.449 "peer_address": { 00:20:54.449 "trtype": "TCP", 00:20:54.449 "adrfam": "IPv4", 00:20:54.449 "traddr": "10.0.0.1", 00:20:54.449 "trsvcid": "59944" 00:20:54.449 }, 00:20:54.449 "auth": { 00:20:54.449 "state": "completed", 00:20:54.449 "digest": "sha384", 00:20:54.449 "dhgroup": "ffdhe3072" 00:20:54.449 } 00:20:54.449 } 00:20:54.449 ]' 00:20:54.449 09:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:54.449 09:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:54.449 09:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:54.449 09:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:54.449 09:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:54.449 09:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.449 09:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.449 09:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.706 09:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NzQ1YjdmZDVmOTNlNTAxZTM0YTAzMGFmZDc4MTY5ZmM1N2MxNGZjMmRlMmVhZWEwIyEwPg==: --dhchap-ctrl-secret DHHC-1:03:ZTNhNmFiYTIyNmNjODY3YTdkMDYyNWMwMGIyMGE0NzdjNDM2YzJlZjE5YzA0ZDg1MWZkNzQwZTVjYTNjZjVhZBT0aQU=: 00:20:55.640 09:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.640 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.640 09:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:55.640 09:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.640 09:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.640 09:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.640 09:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:55.640 09:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:55.640 09:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:55.898 09:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:20:55.898 09:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:55.898 09:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:20:55.898 09:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:55.898 09:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:55.898 09:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.898 09:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.898 09:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.898 09:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.898 09:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.898 09:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.898 09:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.463 00:20:56.463 09:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:56.463 09:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:56.463 09:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.720 09:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.720 09:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.720 09:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.720 09:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.720 09:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.720 09:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:56.720 { 00:20:56.720 "cntlid": 67, 00:20:56.720 "qid": 0, 00:20:56.720 "state": "enabled", 00:20:56.720 "thread": "nvmf_tgt_poll_group_000", 00:20:56.720 "listen_address": { 00:20:56.720 "trtype": "TCP", 00:20:56.720 "adrfam": "IPv4", 00:20:56.720 "traddr": "10.0.0.2", 00:20:56.720 "trsvcid": "4420" 00:20:56.720 }, 00:20:56.720 "peer_address": { 00:20:56.720 "trtype": "TCP", 00:20:56.720 "adrfam": "IPv4", 00:20:56.720 "traddr": "10.0.0.1", 00:20:56.720 "trsvcid": "59978" 00:20:56.720 }, 00:20:56.720 "auth": { 00:20:56.720 "state": "completed", 00:20:56.720 "digest": "sha384", 00:20:56.720 "dhgroup": "ffdhe3072" 00:20:56.720 } 00:20:56.720 } 00:20:56.720 ]' 00:20:56.720 09:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:56.720 09:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:56.720 09:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:56.720 09:30:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:56.720 09:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:56.720 09:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.720 09:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.720 09:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.977 09:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MGZjM2VhZDFmMDg1OTVlODc3OWIwODUwZjNkOWIwYzNGi3HN: --dhchap-ctrl-secret DHHC-1:02:YzQxZmU0MGQ3ZjQ4MGNiNzVhNjcwNWU4MTcwZmZmZGQwNDIyYTcwMWE3ZjFkYTVjZh8NIw==: 00:20:57.908 09:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.908 09:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:57.908 09:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.908 09:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.165 09:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.165 09:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:58.165 09:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:58.165 09:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:58.424 09:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:20:58.424 09:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:58.424 09:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:58.424 09:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:58.424 09:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:58.424 09:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.424 09:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:58.424 09:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.424 09:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.424 09:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.424 09:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:58.424 09:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:58.681 00:20:58.681 09:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:58.681 09:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:58.681 09:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.939 09:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.939 09:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.939 09:30:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.939 09:30:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.939 09:30:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.939 09:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:58.939 { 00:20:58.939 "cntlid": 69, 00:20:58.939 "qid": 0, 00:20:58.939 "state": "enabled", 00:20:58.939 "thread": "nvmf_tgt_poll_group_000", 00:20:58.939 "listen_address": { 00:20:58.939 "trtype": "TCP", 00:20:58.939 "adrfam": "IPv4", 00:20:58.939 "traddr": "10.0.0.2", 00:20:58.939 "trsvcid": "4420" 00:20:58.939 }, 00:20:58.939 "peer_address": { 00:20:58.939 "trtype": "TCP", 00:20:58.939 "adrfam": "IPv4", 00:20:58.939 "traddr": "10.0.0.1", 00:20:58.939 "trsvcid": "60016" 00:20:58.939 }, 00:20:58.939 "auth": { 00:20:58.939 "state": "completed", 00:20:58.939 "digest": "sha384", 00:20:58.939 "dhgroup": "ffdhe3072" 00:20:58.939 } 00:20:58.939 } 00:20:58.939 ]' 00:20:58.939 09:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:58.939 09:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:58.939 09:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:58.939 09:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:58.939 09:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:58.939 09:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.939 09:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.939 09:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.196 09:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZDE5N2YzOTY4OTNjMzgwYTM4MmQ0MzdkYWMzMTc1NWNlMzQzOGNhZWM3ZmJkMmJkcIhjtQ==: --dhchap-ctrl-secret 
DHHC-1:01:ZWQ5NTllNDE4M2ZkMjY5YTBhMTE4NTYwZDhkMzI0NziXBON7: 00:21:00.568 09:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.568 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.568 09:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:00.568 09:30:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.568 09:30:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.568 09:30:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.568 09:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:00.568 09:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:00.568 09:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:00.568 09:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:21:00.568 09:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:00.568 09:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:00.568 09:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:00.568 09:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:00.568 09:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.568 09:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:00.568 09:30:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.568 09:30:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.568 09:30:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.568 09:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:00.568 09:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:00.825 00:21:00.825 09:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:00.825 09:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:00.825 09:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.084 09:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.084 09:30:45 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.084 09:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.084 09:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.084 09:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.084 09:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:01.084 { 00:21:01.084 "cntlid": 71, 00:21:01.084 "qid": 0, 00:21:01.084 "state": "enabled", 00:21:01.084 "thread": "nvmf_tgt_poll_group_000", 00:21:01.084 "listen_address": { 00:21:01.084 "trtype": "TCP", 00:21:01.084 "adrfam": "IPv4", 00:21:01.084 "traddr": "10.0.0.2", 00:21:01.084 "trsvcid": "4420" 00:21:01.084 }, 00:21:01.084 "peer_address": { 00:21:01.084 "trtype": "TCP", 00:21:01.084 "adrfam": "IPv4", 00:21:01.084 "traddr": "10.0.0.1", 00:21:01.084 "trsvcid": "60036" 00:21:01.084 }, 00:21:01.084 "auth": { 00:21:01.084 "state": "completed", 00:21:01.084 "digest": "sha384", 00:21:01.084 "dhgroup": "ffdhe3072" 00:21:01.084 } 00:21:01.084 } 00:21:01.084 ]' 00:21:01.084 09:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:01.342 09:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:01.342 09:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:01.342 09:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:01.342 09:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:01.342 09:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.342 09:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.342 09:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.600 09:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZDc0MDkzMjBjZjhlOTY1N2VlY2E2N2E5MGYxZDFhODliMWFjOTcyMzI2MzQyZDZmYmE2ODcxYzhlZDM1OTE4NgHa9ec=: 00:21:02.534 09:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.534 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.534 09:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:02.534 09:30:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.534 09:30:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.534 09:30:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.534 09:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:02.534 09:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:02.534 09:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:02.534 09:30:46 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:02.798 09:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:21:02.798 09:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:02.798 09:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:02.798 09:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:02.798 09:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:02.798 09:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.798 09:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.798 09:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.798 09:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.798 09:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.798 09:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.798 09:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:03.388 00:21:03.388 09:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:03.388 09:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:03.388 09:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.646 09:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.646 09:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.646 09:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.646 09:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.646 09:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.646 09:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:03.646 { 00:21:03.646 "cntlid": 73, 00:21:03.646 "qid": 0, 00:21:03.646 "state": "enabled", 00:21:03.646 "thread": "nvmf_tgt_poll_group_000", 00:21:03.646 "listen_address": { 00:21:03.646 "trtype": "TCP", 00:21:03.646 "adrfam": "IPv4", 00:21:03.646 "traddr": "10.0.0.2", 00:21:03.646 "trsvcid": "4420" 00:21:03.646 }, 00:21:03.646 "peer_address": { 00:21:03.646 "trtype": "TCP", 00:21:03.646 "adrfam": "IPv4", 00:21:03.646 "traddr": "10.0.0.1", 00:21:03.646 "trsvcid": "60068" 00:21:03.646 }, 00:21:03.646 "auth": { 00:21:03.646 
"state": "completed", 00:21:03.646 "digest": "sha384", 00:21:03.646 "dhgroup": "ffdhe4096" 00:21:03.646 } 00:21:03.646 } 00:21:03.646 ]' 00:21:03.646 09:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:03.646 09:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:03.646 09:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:03.646 09:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:03.646 09:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:03.646 09:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.646 09:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.646 09:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.904 09:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NzQ1YjdmZDVmOTNlNTAxZTM0YTAzMGFmZDc4MTY5ZmM1N2MxNGZjMmRlMmVhZWEwIyEwPg==: --dhchap-ctrl-secret DHHC-1:03:ZTNhNmFiYTIyNmNjODY3YTdkMDYyNWMwMGIyMGE0NzdjNDM2YzJlZjE5YzA0ZDg1MWZkNzQwZTVjYTNjZjVhZBT0aQU=: 00:21:04.838 09:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.838 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.838 09:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:04.838 09:30:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.838 09:30:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.838 09:30:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.838 09:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:04.838 09:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:04.838 09:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:05.096 09:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:21:05.096 09:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:05.096 09:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:05.096 09:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:05.096 09:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:05.096 09:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.096 09:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.096 09:30:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.096 09:30:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.096 09:30:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.096 09:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.096 09:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.661 00:21:05.661 09:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:05.661 09:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:05.661 09:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.919 09:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.919 09:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.919 09:30:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.919 09:30:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.919 09:30:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.919 09:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:05.919 { 00:21:05.919 "cntlid": 75, 00:21:05.919 "qid": 0, 00:21:05.919 "state": "enabled", 00:21:05.919 "thread": "nvmf_tgt_poll_group_000", 00:21:05.919 "listen_address": { 00:21:05.919 "trtype": "TCP", 00:21:05.919 "adrfam": "IPv4", 00:21:05.919 "traddr": "10.0.0.2", 00:21:05.919 "trsvcid": "4420" 00:21:05.919 }, 00:21:05.919 "peer_address": { 00:21:05.919 "trtype": "TCP", 00:21:05.919 "adrfam": "IPv4", 00:21:05.919 "traddr": "10.0.0.1", 00:21:05.919 "trsvcid": "52680" 00:21:05.919 }, 00:21:05.919 "auth": { 00:21:05.919 "state": "completed", 00:21:05.919 "digest": "sha384", 00:21:05.919 "dhgroup": "ffdhe4096" 00:21:05.919 } 00:21:05.919 } 00:21:05.919 ]' 00:21:05.919 09:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:05.919 09:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:05.919 09:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:05.919 09:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:05.919 09:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:05.919 09:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.919 09:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.919 09:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.177 09:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MGZjM2VhZDFmMDg1OTVlODc3OWIwODUwZjNkOWIwYzNGi3HN: --dhchap-ctrl-secret DHHC-1:02:YzQxZmU0MGQ3ZjQ4MGNiNzVhNjcwNWU4MTcwZmZmZGQwNDIyYTcwMWE3ZjFkYTVjZh8NIw==: 00:21:07.111 09:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.111 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.111 09:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:07.111 09:30:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.111 09:30:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.111 09:30:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.111 09:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:07.111 09:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:07.111 09:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:07.677 09:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:21:07.677 09:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:07.677 09:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:07.677 09:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:07.677 09:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:07.677 09:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.677 09:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:07.677 09:30:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.677 09:30:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.677 09:30:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.677 09:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:07.677 09:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:21:07.935 00:21:07.935 09:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:07.935 09:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.935 09:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:08.193 09:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.193 09:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.193 09:30:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.193 09:30:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.193 09:30:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.193 09:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:08.193 { 00:21:08.193 "cntlid": 77, 00:21:08.193 "qid": 0, 00:21:08.193 "state": "enabled", 00:21:08.193 "thread": "nvmf_tgt_poll_group_000", 00:21:08.193 "listen_address": { 00:21:08.193 "trtype": "TCP", 00:21:08.193 "adrfam": "IPv4", 00:21:08.193 "traddr": "10.0.0.2", 00:21:08.193 "trsvcid": "4420" 00:21:08.193 }, 00:21:08.193 "peer_address": { 00:21:08.193 "trtype": "TCP", 00:21:08.193 "adrfam": "IPv4", 00:21:08.193 "traddr": "10.0.0.1", 00:21:08.193 "trsvcid": "52706" 00:21:08.193 }, 00:21:08.193 "auth": { 00:21:08.193 "state": "completed", 00:21:08.193 "digest": "sha384", 00:21:08.193 "dhgroup": "ffdhe4096" 00:21:08.193 } 00:21:08.193 } 00:21:08.193 ]' 00:21:08.193 09:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:08.193 09:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:08.193 09:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:08.193 09:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:08.193 09:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:08.193 09:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.193 09:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.193 09:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.451 09:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZDE5N2YzOTY4OTNjMzgwYTM4MmQ0MzdkYWMzMTc1NWNlMzQzOGNhZWM3ZmJkMmJkcIhjtQ==: --dhchap-ctrl-secret DHHC-1:01:ZWQ5NTllNDE4M2ZkMjY5YTBhMTE4NTYwZDhkMzI0NziXBON7: 00:21:09.381 09:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.637 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.637 09:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:09.637 09:30:53 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.637 09:30:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.637 09:30:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.637 09:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:09.637 09:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:09.637 09:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:09.894 09:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:21:09.894 09:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:09.894 09:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:09.894 09:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:09.894 09:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:09.894 09:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.894 09:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:09.894 09:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.894 09:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.894 09:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.894 09:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:09.894 09:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:10.152 00:21:10.152 09:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:10.152 09:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:10.152 09:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.409 09:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.409 09:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.409 09:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.409 09:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.409 09:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.409 09:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:10.409 { 00:21:10.409 "cntlid": 79, 00:21:10.409 "qid": 
0, 00:21:10.409 "state": "enabled", 00:21:10.409 "thread": "nvmf_tgt_poll_group_000", 00:21:10.409 "listen_address": { 00:21:10.409 "trtype": "TCP", 00:21:10.409 "adrfam": "IPv4", 00:21:10.409 "traddr": "10.0.0.2", 00:21:10.409 "trsvcid": "4420" 00:21:10.409 }, 00:21:10.409 "peer_address": { 00:21:10.409 "trtype": "TCP", 00:21:10.409 "adrfam": "IPv4", 00:21:10.409 "traddr": "10.0.0.1", 00:21:10.409 "trsvcid": "52718" 00:21:10.409 }, 00:21:10.409 "auth": { 00:21:10.409 "state": "completed", 00:21:10.409 "digest": "sha384", 00:21:10.409 "dhgroup": "ffdhe4096" 00:21:10.409 } 00:21:10.409 } 00:21:10.409 ]' 00:21:10.409 09:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:10.409 09:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:10.409 09:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:10.409 09:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:10.409 09:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:10.667 09:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.667 09:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.667 09:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.950 09:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZDc0MDkzMjBjZjhlOTY1N2VlY2E2N2E5MGYxZDFhODliMWFjOTcyMzI2MzQyZDZmYmE2ODcxYzhlZDM1OTE4NgHa9ec=: 00:21:11.883 09:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.883 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.883 09:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:11.883 09:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.883 09:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.883 09:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.883 09:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:11.883 09:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:11.883 09:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:11.883 09:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:11.883 09:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:21:11.883 09:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:11.883 09:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:11.883 09:30:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:11.883 09:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:11.883 09:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.883 09:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.883 09:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.883 09:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.140 09:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.140 09:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.141 09:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.704 00:21:12.704 09:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:12.704 09:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.704 09:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:12.704 09:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.704 09:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.704 09:30:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.704 09:30:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.704 09:30:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.704 09:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:12.704 { 00:21:12.704 "cntlid": 81, 00:21:12.704 "qid": 0, 00:21:12.704 "state": "enabled", 00:21:12.704 "thread": "nvmf_tgt_poll_group_000", 00:21:12.704 "listen_address": { 00:21:12.704 "trtype": "TCP", 00:21:12.704 "adrfam": "IPv4", 00:21:12.704 "traddr": "10.0.0.2", 00:21:12.704 "trsvcid": "4420" 00:21:12.704 }, 00:21:12.704 "peer_address": { 00:21:12.704 "trtype": "TCP", 00:21:12.704 "adrfam": "IPv4", 00:21:12.704 "traddr": "10.0.0.1", 00:21:12.704 "trsvcid": "52746" 00:21:12.704 }, 00:21:12.704 "auth": { 00:21:12.704 "state": "completed", 00:21:12.704 "digest": "sha384", 00:21:12.704 "dhgroup": "ffdhe6144" 00:21:12.704 } 00:21:12.704 } 00:21:12.704 ]' 00:21:12.704 09:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:12.961 09:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:12.961 09:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:12.961 09:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:12.961 09:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:12.961 09:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.961 09:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.961 09:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.217 09:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NzQ1YjdmZDVmOTNlNTAxZTM0YTAzMGFmZDc4MTY5ZmM1N2MxNGZjMmRlMmVhZWEwIyEwPg==: --dhchap-ctrl-secret DHHC-1:03:ZTNhNmFiYTIyNmNjODY3YTdkMDYyNWMwMGIyMGE0NzdjNDM2YzJlZjE5YzA0ZDg1MWZkNzQwZTVjYTNjZjVhZBT0aQU=: 00:21:14.147 09:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.147 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.148 09:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:14.148 09:30:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.148 09:30:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.148 09:30:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.148 09:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:14.148 09:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:14.148 09:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:14.406 09:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:21:14.406 09:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:14.406 09:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:14.406 09:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:14.406 09:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:14.406 09:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.406 09:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.406 09:30:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.406 09:30:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.406 09:30:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.406 09:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.406 09:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.971 00:21:14.971 09:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:14.971 09:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:14.971 09:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.229 09:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.229 09:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.229 09:30:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.229 09:30:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.229 09:30:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.229 09:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:15.229 { 00:21:15.229 "cntlid": 83, 00:21:15.229 "qid": 0, 00:21:15.229 "state": "enabled", 00:21:15.229 "thread": "nvmf_tgt_poll_group_000", 00:21:15.229 "listen_address": { 00:21:15.229 "trtype": "TCP", 00:21:15.229 "adrfam": "IPv4", 00:21:15.229 "traddr": "10.0.0.2", 00:21:15.229 "trsvcid": "4420" 00:21:15.229 }, 00:21:15.229 "peer_address": { 00:21:15.229 "trtype": "TCP", 00:21:15.229 "adrfam": "IPv4", 00:21:15.229 "traddr": "10.0.0.1", 00:21:15.229 "trsvcid": "38476" 00:21:15.229 }, 00:21:15.229 "auth": { 00:21:15.229 "state": "completed", 00:21:15.229 "digest": "sha384", 00:21:15.229 "dhgroup": "ffdhe6144" 00:21:15.229 } 00:21:15.229 } 00:21:15.229 ]' 00:21:15.229 09:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:15.229 09:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:15.229 09:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:15.229 09:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:15.229 09:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:15.229 09:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.229 09:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.229 09:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.489 09:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MGZjM2VhZDFmMDg1OTVlODc3OWIwODUwZjNkOWIwYzNGi3HN: --dhchap-ctrl-secret 
DHHC-1:02:YzQxZmU0MGQ3ZjQ4MGNiNzVhNjcwNWU4MTcwZmZmZGQwNDIyYTcwMWE3ZjFkYTVjZh8NIw==: 00:21:16.423 09:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.423 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.423 09:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:16.423 09:31:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.423 09:31:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.423 09:31:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.423 09:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:16.423 09:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:16.423 09:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:16.681 09:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:21:16.681 09:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:16.681 09:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:16.681 09:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:16.681 09:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:16.681 09:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:16.681 09:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.681 09:31:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.681 09:31:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.939 09:31:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.939 09:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.939 09:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.504 00:21:17.504 09:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:17.504 09:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:17.504 09:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.504 09:31:01 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.504 09:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.504 09:31:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.504 09:31:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.504 09:31:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.504 09:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:17.504 { 00:21:17.504 "cntlid": 85, 00:21:17.504 "qid": 0, 00:21:17.504 "state": "enabled", 00:21:17.504 "thread": "nvmf_tgt_poll_group_000", 00:21:17.504 "listen_address": { 00:21:17.504 "trtype": "TCP", 00:21:17.504 "adrfam": "IPv4", 00:21:17.504 "traddr": "10.0.0.2", 00:21:17.504 "trsvcid": "4420" 00:21:17.504 }, 00:21:17.504 "peer_address": { 00:21:17.504 "trtype": "TCP", 00:21:17.504 "adrfam": "IPv4", 00:21:17.504 "traddr": "10.0.0.1", 00:21:17.504 "trsvcid": "38498" 00:21:17.504 }, 00:21:17.504 "auth": { 00:21:17.504 "state": "completed", 00:21:17.504 "digest": "sha384", 00:21:17.504 "dhgroup": "ffdhe6144" 00:21:17.504 } 00:21:17.504 } 00:21:17.504 ]' 00:21:17.504 09:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:17.762 09:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:17.762 09:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:17.762 09:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:17.762 09:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:17.762 09:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.762 09:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.762 09:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.019 09:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZDE5N2YzOTY4OTNjMzgwYTM4MmQ0MzdkYWMzMTc1NWNlMzQzOGNhZWM3ZmJkMmJkcIhjtQ==: --dhchap-ctrl-secret DHHC-1:01:ZWQ5NTllNDE4M2ZkMjY5YTBhMTE4NTYwZDhkMzI0NziXBON7: 00:21:18.951 09:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.951 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.951 09:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:18.951 09:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.951 09:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.951 09:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.951 09:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:18.951 09:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
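For reference, every digest/dhgroup/key iteration the log keeps repeating reduces to the outline below. This is a minimal sketch assembled only from the commands visible in this run; rpc.py is the SPDK script that the rpc_cmd/hostrpc wrappers in target/auth.sh invoke, and HOSTNQN is a placeholder for the nqn.2014-08.org.nvmexpress:uuid:... host NQN, not a new option:

    # Host side: pin the initiator to one digest/dhgroup combination.
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
    # Target side: allow the host with the key pair under test.
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # Host side: attach a controller with the same keys, triggering DH-HMAC-CHAP.
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # Inspect the negotiated auth parameters, then tear down before the next combination.
    rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth'
    rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"

Between iterations only the --dhchap-digests/--dhchap-dhgroups pair and the keyN/ckeyN handed to both sides change; the test then asserts that .auth.digest, .auth.dhgroup and .auth.state in the qpair listing match what was configured.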
00:21:18.951 09:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:19.208 09:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:21:19.208 09:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:19.208 09:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:19.208 09:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:19.208 09:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:19.208 09:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.208 09:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:19.208 09:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.208 09:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.208 09:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.208 09:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:19.208 09:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:19.773 00:21:19.773 09:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:19.773 09:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.773 09:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:20.031 09:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.031 09:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.031 09:31:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.031 09:31:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.031 09:31:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.031 09:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:20.031 { 00:21:20.031 "cntlid": 87, 00:21:20.031 "qid": 0, 00:21:20.031 "state": "enabled", 00:21:20.031 "thread": "nvmf_tgt_poll_group_000", 00:21:20.031 "listen_address": { 00:21:20.031 "trtype": "TCP", 00:21:20.031 "adrfam": "IPv4", 00:21:20.031 "traddr": "10.0.0.2", 00:21:20.031 "trsvcid": "4420" 00:21:20.031 }, 00:21:20.031 "peer_address": { 00:21:20.031 "trtype": "TCP", 00:21:20.031 "adrfam": "IPv4", 00:21:20.031 "traddr": "10.0.0.1", 00:21:20.031 "trsvcid": "38520" 00:21:20.031 }, 00:21:20.031 "auth": { 00:21:20.031 "state": "completed", 
00:21:20.031 "digest": "sha384", 00:21:20.031 "dhgroup": "ffdhe6144" 00:21:20.031 } 00:21:20.031 } 00:21:20.031 ]' 00:21:20.031 09:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:20.031 09:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:20.031 09:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:20.031 09:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:20.031 09:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:20.031 09:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.031 09:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.031 09:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.288 09:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZDc0MDkzMjBjZjhlOTY1N2VlY2E2N2E5MGYxZDFhODliMWFjOTcyMzI2MzQyZDZmYmE2ODcxYzhlZDM1OTE4NgHa9ec=: 00:21:21.220 09:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.478 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.478 09:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:21.478 09:31:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.478 09:31:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.478 09:31:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.478 09:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:21.478 09:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:21.478 09:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:21.478 09:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:21.736 09:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:21:21.736 09:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:21.736 09:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:21.736 09:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:21.736 09:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:21.736 09:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.736 09:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:21:21.736 09:31:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.736 09:31:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.736 09:31:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.736 09:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:21.736 09:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.670 00:21:22.670 09:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:22.670 09:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:22.670 09:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.928 09:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.928 09:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.928 09:31:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.928 09:31:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.928 09:31:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.928 09:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:22.928 { 00:21:22.928 "cntlid": 89, 00:21:22.928 "qid": 0, 00:21:22.928 "state": "enabled", 00:21:22.928 "thread": "nvmf_tgt_poll_group_000", 00:21:22.928 "listen_address": { 00:21:22.928 "trtype": "TCP", 00:21:22.928 "adrfam": "IPv4", 00:21:22.928 "traddr": "10.0.0.2", 00:21:22.928 "trsvcid": "4420" 00:21:22.928 }, 00:21:22.928 "peer_address": { 00:21:22.928 "trtype": "TCP", 00:21:22.928 "adrfam": "IPv4", 00:21:22.928 "traddr": "10.0.0.1", 00:21:22.928 "trsvcid": "38536" 00:21:22.928 }, 00:21:22.928 "auth": { 00:21:22.928 "state": "completed", 00:21:22.928 "digest": "sha384", 00:21:22.928 "dhgroup": "ffdhe8192" 00:21:22.928 } 00:21:22.928 } 00:21:22.928 ]' 00:21:22.928 09:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:22.928 09:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:22.928 09:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:22.928 09:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:22.928 09:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:22.928 09:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.928 09:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.928 09:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.185 09:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NzQ1YjdmZDVmOTNlNTAxZTM0YTAzMGFmZDc4MTY5ZmM1N2MxNGZjMmRlMmVhZWEwIyEwPg==: --dhchap-ctrl-secret DHHC-1:03:ZTNhNmFiYTIyNmNjODY3YTdkMDYyNWMwMGIyMGE0NzdjNDM2YzJlZjE5YzA0ZDg1MWZkNzQwZTVjYTNjZjVhZBT0aQU=: 00:21:24.554 09:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.555 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:24.555 09:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:24.555 09:31:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.555 09:31:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.555 09:31:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.555 09:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:24.555 09:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:24.555 09:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:24.555 09:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:21:24.555 09:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:24.555 09:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:24.555 09:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:24.555 09:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:24.555 09:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.555 09:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.555 09:31:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.555 09:31:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.555 09:31:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.555 09:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.555 09:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
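The checks that follow each attach are likewise identical: the target's qpair listing must report the configured digest, dhgroup and a completed auth state, and the same credentials are then exercised once more through the kernel initiator with nvme-cli before the host entry is removed. Sketched from the commands in the log, with HOSTNQN/HOSTID and KEY1/CKEY1 as placeholders for the host NQN/UUID and the literal DHHC-1:... secrets generated earlier in the run:

    # The target's view of the authenticated qpair must match what was configured.
    rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 > qpairs.json
    jq -r '.[0].auth.digest'  qpairs.json   # expect sha384
    jq -r '.[0].auth.dhgroup' qpairs.json   # expect ffdhe8192
    jq -r '.[0].auth.state'   qpairs.json   # expect "completed"
    # Same credentials through the kernel initiator, then disconnect.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$HOSTNQN" --hostid "$HOSTID" \
        --dhchap-secret "$KEY1" --dhchap-ctrl-secret "$CKEY1"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0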
00:21:25.489 00:21:25.489 09:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:25.489 09:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:25.489 09:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:25.748 09:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.748 09:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:25.748 09:31:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.748 09:31:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.748 09:31:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.748 09:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:25.748 { 00:21:25.748 "cntlid": 91, 00:21:25.748 "qid": 0, 00:21:25.748 "state": "enabled", 00:21:25.748 "thread": "nvmf_tgt_poll_group_000", 00:21:25.748 "listen_address": { 00:21:25.748 "trtype": "TCP", 00:21:25.748 "adrfam": "IPv4", 00:21:25.748 "traddr": "10.0.0.2", 00:21:25.748 "trsvcid": "4420" 00:21:25.748 }, 00:21:25.748 "peer_address": { 00:21:25.748 "trtype": "TCP", 00:21:25.748 "adrfam": "IPv4", 00:21:25.748 "traddr": "10.0.0.1", 00:21:25.748 "trsvcid": "37448" 00:21:25.748 }, 00:21:25.748 "auth": { 00:21:25.748 "state": "completed", 00:21:25.748 "digest": "sha384", 00:21:25.748 "dhgroup": "ffdhe8192" 00:21:25.748 } 00:21:25.748 } 00:21:25.748 ]' 00:21:25.748 09:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:25.748 09:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:25.748 09:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:25.748 09:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:25.748 09:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:25.748 09:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.748 09:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.748 09:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.315 09:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MGZjM2VhZDFmMDg1OTVlODc3OWIwODUwZjNkOWIwYzNGi3HN: --dhchap-ctrl-secret DHHC-1:02:YzQxZmU0MGQ3ZjQ4MGNiNzVhNjcwNWU4MTcwZmZmZGQwNDIyYTcwMWE3ZjFkYTVjZh8NIw==: 00:21:27.250 09:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:27.250 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:27.250 09:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:27.250 09:31:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:21:27.250 09:31:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.250 09:31:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.250 09:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:27.250 09:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:27.250 09:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:27.508 09:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:21:27.508 09:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:27.508 09:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:27.508 09:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:27.508 09:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:27.508 09:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:27.508 09:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:27.508 09:31:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.508 09:31:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.508 09:31:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.508 09:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:27.508 09:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.443 00:21:28.443 09:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:28.443 09:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:28.443 09:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:28.443 09:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.443 09:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:28.443 09:31:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.443 09:31:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.443 09:31:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.443 09:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:28.443 { 
00:21:28.443 "cntlid": 93, 00:21:28.443 "qid": 0, 00:21:28.443 "state": "enabled", 00:21:28.443 "thread": "nvmf_tgt_poll_group_000", 00:21:28.443 "listen_address": { 00:21:28.443 "trtype": "TCP", 00:21:28.443 "adrfam": "IPv4", 00:21:28.443 "traddr": "10.0.0.2", 00:21:28.443 "trsvcid": "4420" 00:21:28.443 }, 00:21:28.443 "peer_address": { 00:21:28.443 "trtype": "TCP", 00:21:28.443 "adrfam": "IPv4", 00:21:28.443 "traddr": "10.0.0.1", 00:21:28.443 "trsvcid": "37472" 00:21:28.443 }, 00:21:28.443 "auth": { 00:21:28.443 "state": "completed", 00:21:28.443 "digest": "sha384", 00:21:28.443 "dhgroup": "ffdhe8192" 00:21:28.443 } 00:21:28.443 } 00:21:28.443 ]' 00:21:28.443 09:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:28.705 09:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:28.705 09:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:28.705 09:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:28.705 09:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:28.705 09:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:28.705 09:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:28.705 09:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:28.961 09:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZDE5N2YzOTY4OTNjMzgwYTM4MmQ0MzdkYWMzMTc1NWNlMzQzOGNhZWM3ZmJkMmJkcIhjtQ==: --dhchap-ctrl-secret DHHC-1:01:ZWQ5NTllNDE4M2ZkMjY5YTBhMTE4NTYwZDhkMzI0NziXBON7: 00:21:29.893 09:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:29.893 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:29.893 09:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:29.893 09:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.893 09:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.893 09:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.893 09:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:29.893 09:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:29.893 09:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:30.151 09:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:21:30.151 09:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:30.151 09:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:30.151 09:31:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:30.151 09:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:30.151 09:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.151 09:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:30.151 09:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.151 09:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.151 09:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.151 09:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:30.151 09:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:31.084 00:21:31.084 09:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:31.084 09:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:31.084 09:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.342 09:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.342 09:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.342 09:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.342 09:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.342 09:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.342 09:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:31.342 { 00:21:31.342 "cntlid": 95, 00:21:31.342 "qid": 0, 00:21:31.342 "state": "enabled", 00:21:31.342 "thread": "nvmf_tgt_poll_group_000", 00:21:31.342 "listen_address": { 00:21:31.342 "trtype": "TCP", 00:21:31.342 "adrfam": "IPv4", 00:21:31.342 "traddr": "10.0.0.2", 00:21:31.342 "trsvcid": "4420" 00:21:31.342 }, 00:21:31.342 "peer_address": { 00:21:31.342 "trtype": "TCP", 00:21:31.342 "adrfam": "IPv4", 00:21:31.342 "traddr": "10.0.0.1", 00:21:31.342 "trsvcid": "37510" 00:21:31.342 }, 00:21:31.342 "auth": { 00:21:31.342 "state": "completed", 00:21:31.342 "digest": "sha384", 00:21:31.342 "dhgroup": "ffdhe8192" 00:21:31.342 } 00:21:31.342 } 00:21:31.342 ]' 00:21:31.342 09:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:31.342 09:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:31.342 09:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:31.342 09:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:31.342 09:31:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:31.600 09:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.600 09:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.600 09:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.858 09:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZDc0MDkzMjBjZjhlOTY1N2VlY2E2N2E5MGYxZDFhODliMWFjOTcyMzI2MzQyZDZmYmE2ODcxYzhlZDM1OTE4NgHa9ec=: 00:21:32.792 09:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.792 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.792 09:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:32.792 09:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.792 09:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.792 09:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.792 09:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:21:32.792 09:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:32.792 09:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:32.792 09:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:32.792 09:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:33.050 09:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:21:33.050 09:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:33.050 09:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:33.050 09:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:33.050 09:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:33.050 09:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.050 09:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.050 09:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.050 09:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.050 09:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.050 09:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.050 09:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.308 00:21:33.308 09:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:33.308 09:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.308 09:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:33.566 09:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.566 09:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.566 09:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.566 09:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.566 09:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.566 09:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:33.566 { 00:21:33.566 "cntlid": 97, 00:21:33.566 "qid": 0, 00:21:33.566 "state": "enabled", 00:21:33.566 "thread": "nvmf_tgt_poll_group_000", 00:21:33.566 "listen_address": { 00:21:33.566 "trtype": "TCP", 00:21:33.566 "adrfam": "IPv4", 00:21:33.566 "traddr": "10.0.0.2", 00:21:33.566 "trsvcid": "4420" 00:21:33.566 }, 00:21:33.566 "peer_address": { 00:21:33.566 "trtype": "TCP", 00:21:33.566 "adrfam": "IPv4", 00:21:33.566 "traddr": "10.0.0.1", 00:21:33.566 "trsvcid": "37550" 00:21:33.566 }, 00:21:33.566 "auth": { 00:21:33.566 "state": "completed", 00:21:33.566 "digest": "sha512", 00:21:33.566 "dhgroup": "null" 00:21:33.566 } 00:21:33.566 } 00:21:33.566 ]' 00:21:33.566 09:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:33.566 09:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:33.566 09:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:33.566 09:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:33.566 09:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:33.566 09:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.566 09:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.566 09:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.825 09:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NzQ1YjdmZDVmOTNlNTAxZTM0YTAzMGFmZDc4MTY5ZmM1N2MxNGZjMmRlMmVhZWEwIyEwPg==: --dhchap-ctrl-secret 
DHHC-1:03:ZTNhNmFiYTIyNmNjODY3YTdkMDYyNWMwMGIyMGE0NzdjNDM2YzJlZjE5YzA0ZDg1MWZkNzQwZTVjYTNjZjVhZBT0aQU=: 00:21:34.759 09:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.759 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.759 09:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:34.759 09:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.759 09:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.759 09:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.759 09:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:34.759 09:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:34.759 09:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:35.017 09:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:21:35.017 09:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:35.017 09:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:35.017 09:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:35.017 09:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:35.017 09:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:35.017 09:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:35.017 09:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.017 09:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.017 09:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.017 09:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:35.017 09:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:35.275 00:21:35.275 09:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:35.275 09:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:35.275 09:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.533 09:31:19 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.533 09:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.533 09:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.533 09:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.533 09:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.533 09:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:35.533 { 00:21:35.533 "cntlid": 99, 00:21:35.533 "qid": 0, 00:21:35.533 "state": "enabled", 00:21:35.533 "thread": "nvmf_tgt_poll_group_000", 00:21:35.533 "listen_address": { 00:21:35.533 "trtype": "TCP", 00:21:35.533 "adrfam": "IPv4", 00:21:35.533 "traddr": "10.0.0.2", 00:21:35.533 "trsvcid": "4420" 00:21:35.533 }, 00:21:35.533 "peer_address": { 00:21:35.533 "trtype": "TCP", 00:21:35.533 "adrfam": "IPv4", 00:21:35.533 "traddr": "10.0.0.1", 00:21:35.533 "trsvcid": "57206" 00:21:35.533 }, 00:21:35.533 "auth": { 00:21:35.533 "state": "completed", 00:21:35.533 "digest": "sha512", 00:21:35.533 "dhgroup": "null" 00:21:35.533 } 00:21:35.533 } 00:21:35.533 ]' 00:21:35.533 09:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:35.791 09:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:35.791 09:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:35.791 09:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:35.791 09:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:35.791 09:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.791 09:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.791 09:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.049 09:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MGZjM2VhZDFmMDg1OTVlODc3OWIwODUwZjNkOWIwYzNGi3HN: --dhchap-ctrl-secret DHHC-1:02:YzQxZmU0MGQ3ZjQ4MGNiNzVhNjcwNWU4MTcwZmZmZGQwNDIyYTcwMWE3ZjFkYTVjZh8NIw==: 00:21:36.982 09:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.982 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.982 09:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:36.982 09:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.982 09:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.982 09:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.982 09:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:36.982 09:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:36.982 09:31:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:37.240 09:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:21:37.240 09:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:37.240 09:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:37.240 09:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:37.240 09:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:37.240 09:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.240 09:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.240 09:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.240 09:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.240 09:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.240 09:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.240 09:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.498 00:21:37.498 09:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:37.498 09:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:37.498 09:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.756 09:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.756 09:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.756 09:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.756 09:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.756 09:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.756 09:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:37.756 { 00:21:37.756 "cntlid": 101, 00:21:37.756 "qid": 0, 00:21:37.756 "state": "enabled", 00:21:37.756 "thread": "nvmf_tgt_poll_group_000", 00:21:37.756 "listen_address": { 00:21:37.756 "trtype": "TCP", 00:21:37.756 "adrfam": "IPv4", 00:21:37.756 "traddr": "10.0.0.2", 00:21:37.756 "trsvcid": "4420" 00:21:37.756 }, 00:21:37.756 "peer_address": { 00:21:37.756 "trtype": "TCP", 00:21:37.756 "adrfam": "IPv4", 00:21:37.756 "traddr": "10.0.0.1", 00:21:37.756 "trsvcid": "57228" 00:21:37.756 }, 00:21:37.756 "auth": 
{ 00:21:37.756 "state": "completed", 00:21:37.756 "digest": "sha512", 00:21:37.756 "dhgroup": "null" 00:21:37.756 } 00:21:37.756 } 00:21:37.756 ]' 00:21:37.756 09:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:37.756 09:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:37.756 09:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:37.756 09:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:37.756 09:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:38.014 09:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.014 09:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.014 09:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.271 09:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZDE5N2YzOTY4OTNjMzgwYTM4MmQ0MzdkYWMzMTc1NWNlMzQzOGNhZWM3ZmJkMmJkcIhjtQ==: --dhchap-ctrl-secret DHHC-1:01:ZWQ5NTllNDE4M2ZkMjY5YTBhMTE4NTYwZDhkMzI0NziXBON7: 00:21:39.203 09:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.203 09:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:39.203 09:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.203 09:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.203 09:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.203 09:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:39.203 09:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:39.203 09:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:39.461 09:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:21:39.461 09:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:39.461 09:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:39.461 09:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:39.461 09:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:39.461 09:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:39.461 09:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:39.461 09:31:23 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.461 09:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.461 09:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.461 09:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:39.461 09:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:39.719 00:21:39.719 09:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:39.719 09:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:39.719 09:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.976 09:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.976 09:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:39.976 09:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.976 09:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.976 09:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.976 09:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:39.976 { 00:21:39.976 "cntlid": 103, 00:21:39.976 "qid": 0, 00:21:39.976 "state": "enabled", 00:21:39.976 "thread": "nvmf_tgt_poll_group_000", 00:21:39.976 "listen_address": { 00:21:39.976 "trtype": "TCP", 00:21:39.976 "adrfam": "IPv4", 00:21:39.976 "traddr": "10.0.0.2", 00:21:39.976 "trsvcid": "4420" 00:21:39.977 }, 00:21:39.977 "peer_address": { 00:21:39.977 "trtype": "TCP", 00:21:39.977 "adrfam": "IPv4", 00:21:39.977 "traddr": "10.0.0.1", 00:21:39.977 "trsvcid": "57262" 00:21:39.977 }, 00:21:39.977 "auth": { 00:21:39.977 "state": "completed", 00:21:39.977 "digest": "sha512", 00:21:39.977 "dhgroup": "null" 00:21:39.977 } 00:21:39.977 } 00:21:39.977 ]' 00:21:39.977 09:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:39.977 09:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:39.977 09:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:39.977 09:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:39.977 09:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:39.977 09:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:39.977 09:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.977 09:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.235 09:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZDc0MDkzMjBjZjhlOTY1N2VlY2E2N2E5MGYxZDFhODliMWFjOTcyMzI2MzQyZDZmYmE2ODcxYzhlZDM1OTE4NgHa9ec=: 00:21:41.176 09:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.176 09:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:41.176 09:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.176 09:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.176 09:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.176 09:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:41.176 09:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:41.176 09:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:41.176 09:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:41.460 09:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:21:41.460 09:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:41.460 09:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:41.460 09:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:41.460 09:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:41.460 09:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:41.460 09:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.460 09:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.460 09:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.460 09:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.460 09:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.460 09:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.717 00:21:41.718 09:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:41.718 09:31:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:41.718 09:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.976 09:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.976 09:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:41.976 09:31:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.976 09:31:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.233 09:31:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.233 09:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:42.233 { 00:21:42.233 "cntlid": 105, 00:21:42.233 "qid": 0, 00:21:42.233 "state": "enabled", 00:21:42.233 "thread": "nvmf_tgt_poll_group_000", 00:21:42.233 "listen_address": { 00:21:42.233 "trtype": "TCP", 00:21:42.233 "adrfam": "IPv4", 00:21:42.233 "traddr": "10.0.0.2", 00:21:42.233 "trsvcid": "4420" 00:21:42.233 }, 00:21:42.233 "peer_address": { 00:21:42.233 "trtype": "TCP", 00:21:42.233 "adrfam": "IPv4", 00:21:42.233 "traddr": "10.0.0.1", 00:21:42.233 "trsvcid": "57298" 00:21:42.233 }, 00:21:42.233 "auth": { 00:21:42.233 "state": "completed", 00:21:42.233 "digest": "sha512", 00:21:42.233 "dhgroup": "ffdhe2048" 00:21:42.233 } 00:21:42.233 } 00:21:42.233 ]' 00:21:42.233 09:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:42.233 09:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:42.233 09:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:42.233 09:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:42.233 09:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:42.233 09:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.233 09:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.233 09:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.490 09:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NzQ1YjdmZDVmOTNlNTAxZTM0YTAzMGFmZDc4MTY5ZmM1N2MxNGZjMmRlMmVhZWEwIyEwPg==: --dhchap-ctrl-secret DHHC-1:03:ZTNhNmFiYTIyNmNjODY3YTdkMDYyNWMwMGIyMGE0NzdjNDM2YzJlZjE5YzA0ZDg1MWZkNzQwZTVjYTNjZjVhZBT0aQU=: 00:21:43.422 09:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.422 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.422 09:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:43.423 09:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.423 09:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
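Note on the test flow: every digest/dhgroup iteration recorded in this log runs the same DH-HMAC-CHAP round trip from target/auth.sh. The condensed sketch below is illustrative only and limited to RPCs and nvme-cli calls that actually appear in this log; rpc.py abbreviates the full scripts/rpc.py path used above, rpc_cmd is the test's target-side RPC wrapper, and <host-nqn>, <host-uuid> and the DHHC-1 secrets stand for the literal values printed elsewhere in the log.

    # host side: restrict the allowed digest and DH group for this iteration (values vary per loop)
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
    # target side: register the host on cnode0 with the key under test (controller key only when one is defined)
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <host-nqn> --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # host side: attach, then confirm the controller exists and the qpair authenticated with the expected parameters
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q <host-nqn> -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
    rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'                 # expected: nvme0
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'    # expected: completed
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.digest'   # expected: sha512
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.dhgroup'  # expected: ffdhe2048
    rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    # repeat the handshake through the kernel initiator, then clean up before the next iteration
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q <host-nqn> --hostid <host-uuid> --dhchap-secret <DHHC-1 key1> --dhchap-ctrl-secret <DHHC-1 ckey1>
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 <host-nqn>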
00:21:43.423 09:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.423 09:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:43.423 09:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:43.423 09:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:43.680 09:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:21:43.680 09:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:43.680 09:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:43.680 09:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:43.680 09:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:43.680 09:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:43.680 09:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.680 09:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.680 09:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.680 09:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.680 09:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.680 09:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.937 00:21:43.937 09:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:43.937 09:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.937 09:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:44.195 09:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.195 09:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:44.195 09:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.195 09:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.195 09:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.195 09:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:44.195 { 00:21:44.195 "cntlid": 107, 00:21:44.195 "qid": 0, 00:21:44.195 "state": "enabled", 00:21:44.195 "thread": 
"nvmf_tgt_poll_group_000", 00:21:44.195 "listen_address": { 00:21:44.195 "trtype": "TCP", 00:21:44.195 "adrfam": "IPv4", 00:21:44.195 "traddr": "10.0.0.2", 00:21:44.195 "trsvcid": "4420" 00:21:44.195 }, 00:21:44.195 "peer_address": { 00:21:44.195 "trtype": "TCP", 00:21:44.195 "adrfam": "IPv4", 00:21:44.195 "traddr": "10.0.0.1", 00:21:44.195 "trsvcid": "42908" 00:21:44.195 }, 00:21:44.195 "auth": { 00:21:44.195 "state": "completed", 00:21:44.195 "digest": "sha512", 00:21:44.195 "dhgroup": "ffdhe2048" 00:21:44.195 } 00:21:44.195 } 00:21:44.195 ]' 00:21:44.195 09:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:44.453 09:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:44.453 09:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:44.453 09:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:44.453 09:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:44.453 09:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.453 09:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.453 09:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.710 09:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MGZjM2VhZDFmMDg1OTVlODc3OWIwODUwZjNkOWIwYzNGi3HN: --dhchap-ctrl-secret DHHC-1:02:YzQxZmU0MGQ3ZjQ4MGNiNzVhNjcwNWU4MTcwZmZmZGQwNDIyYTcwMWE3ZjFkYTVjZh8NIw==: 00:21:45.646 09:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:45.646 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:45.646 09:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:45.646 09:31:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.646 09:31:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.646 09:31:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.646 09:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:45.646 09:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:45.646 09:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:45.904 09:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:21:45.904 09:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:45.904 09:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:45.904 09:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:45.904 09:31:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:45.904 09:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:45.904 09:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.904 09:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.904 09:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.904 09:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.904 09:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.904 09:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:46.162 00:21:46.162 09:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:46.162 09:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:46.162 09:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.420 09:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.420 09:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:46.420 09:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.420 09:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.420 09:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.420 09:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:46.420 { 00:21:46.420 "cntlid": 109, 00:21:46.420 "qid": 0, 00:21:46.420 "state": "enabled", 00:21:46.420 "thread": "nvmf_tgt_poll_group_000", 00:21:46.420 "listen_address": { 00:21:46.420 "trtype": "TCP", 00:21:46.420 "adrfam": "IPv4", 00:21:46.420 "traddr": "10.0.0.2", 00:21:46.420 "trsvcid": "4420" 00:21:46.420 }, 00:21:46.420 "peer_address": { 00:21:46.420 "trtype": "TCP", 00:21:46.420 "adrfam": "IPv4", 00:21:46.420 "traddr": "10.0.0.1", 00:21:46.420 "trsvcid": "42934" 00:21:46.420 }, 00:21:46.420 "auth": { 00:21:46.420 "state": "completed", 00:21:46.420 "digest": "sha512", 00:21:46.420 "dhgroup": "ffdhe2048" 00:21:46.420 } 00:21:46.420 } 00:21:46.420 ]' 00:21:46.420 09:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:46.420 09:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:46.420 09:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:46.679 09:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:46.679 09:31:30 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:46.679 09:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:46.679 09:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.679 09:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:46.937 09:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZDE5N2YzOTY4OTNjMzgwYTM4MmQ0MzdkYWMzMTc1NWNlMzQzOGNhZWM3ZmJkMmJkcIhjtQ==: --dhchap-ctrl-secret DHHC-1:01:ZWQ5NTllNDE4M2ZkMjY5YTBhMTE4NTYwZDhkMzI0NziXBON7: 00:21:47.879 09:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:47.879 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:47.879 09:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:47.879 09:31:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.879 09:31:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.879 09:31:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.879 09:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:47.880 09:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:47.880 09:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:48.137 09:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:21:48.137 09:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:48.137 09:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:48.137 09:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:48.137 09:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:48.137 09:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:48.137 09:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:48.137 09:31:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.137 09:31:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.137 09:31:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.137 09:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:48.137 09:31:32 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:48.395 00:21:48.395 09:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:48.395 09:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:48.395 09:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:48.652 09:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.652 09:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:48.652 09:31:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.652 09:31:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.652 09:31:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.652 09:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:48.652 { 00:21:48.652 "cntlid": 111, 00:21:48.652 "qid": 0, 00:21:48.652 "state": "enabled", 00:21:48.652 "thread": "nvmf_tgt_poll_group_000", 00:21:48.652 "listen_address": { 00:21:48.652 "trtype": "TCP", 00:21:48.652 "adrfam": "IPv4", 00:21:48.652 "traddr": "10.0.0.2", 00:21:48.652 "trsvcid": "4420" 00:21:48.652 }, 00:21:48.652 "peer_address": { 00:21:48.652 "trtype": "TCP", 00:21:48.652 "adrfam": "IPv4", 00:21:48.652 "traddr": "10.0.0.1", 00:21:48.652 "trsvcid": "42948" 00:21:48.652 }, 00:21:48.652 "auth": { 00:21:48.652 "state": "completed", 00:21:48.652 "digest": "sha512", 00:21:48.652 "dhgroup": "ffdhe2048" 00:21:48.652 } 00:21:48.652 } 00:21:48.652 ]' 00:21:48.652 09:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:48.652 09:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:48.652 09:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:48.652 09:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:48.652 09:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:48.652 09:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:48.652 09:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:48.652 09:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.910 09:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZDc0MDkzMjBjZjhlOTY1N2VlY2E2N2E5MGYxZDFhODliMWFjOTcyMzI2MzQyZDZmYmE2ODcxYzhlZDM1OTE4NgHa9ec=: 00:21:50.279 09:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:50.279 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:50.279 09:31:34 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:50.279 09:31:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.279 09:31:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.279 09:31:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.279 09:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:50.279 09:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:50.279 09:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:50.279 09:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:50.279 09:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:21:50.279 09:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:50.279 09:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:50.279 09:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:50.279 09:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:50.279 09:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:50.279 09:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.279 09:31:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.279 09:31:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.279 09:31:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.279 09:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.279 09:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.535 00:21:50.535 09:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:50.535 09:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:50.536 09:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.792 09:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.792 09:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.792 09:31:35 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.792 09:31:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.792 09:31:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.792 09:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:50.792 { 00:21:50.792 "cntlid": 113, 00:21:50.792 "qid": 0, 00:21:50.792 "state": "enabled", 00:21:50.792 "thread": "nvmf_tgt_poll_group_000", 00:21:50.792 "listen_address": { 00:21:50.792 "trtype": "TCP", 00:21:50.792 "adrfam": "IPv4", 00:21:50.792 "traddr": "10.0.0.2", 00:21:50.792 "trsvcid": "4420" 00:21:50.792 }, 00:21:50.792 "peer_address": { 00:21:50.792 "trtype": "TCP", 00:21:50.792 "adrfam": "IPv4", 00:21:50.792 "traddr": "10.0.0.1", 00:21:50.792 "trsvcid": "42966" 00:21:50.792 }, 00:21:50.792 "auth": { 00:21:50.792 "state": "completed", 00:21:50.792 "digest": "sha512", 00:21:50.792 "dhgroup": "ffdhe3072" 00:21:50.792 } 00:21:50.792 } 00:21:50.792 ]' 00:21:50.792 09:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:50.792 09:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:50.792 09:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:51.049 09:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:51.049 09:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:51.049 09:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:51.049 09:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:51.049 09:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.306 09:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NzQ1YjdmZDVmOTNlNTAxZTM0YTAzMGFmZDc4MTY5ZmM1N2MxNGZjMmRlMmVhZWEwIyEwPg==: --dhchap-ctrl-secret DHHC-1:03:ZTNhNmFiYTIyNmNjODY3YTdkMDYyNWMwMGIyMGE0NzdjNDM2YzJlZjE5YzA0ZDg1MWZkNzQwZTVjYTNjZjVhZBT0aQU=: 00:21:52.239 09:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:52.239 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:52.239 09:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:52.239 09:31:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.239 09:31:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.239 09:31:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.239 09:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:52.239 09:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:52.239 09:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:52.498 09:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:21:52.498 09:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:52.498 09:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:52.498 09:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:52.498 09:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:52.498 09:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:52.498 09:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.498 09:31:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.498 09:31:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.498 09:31:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.498 09:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.498 09:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.756 00:21:52.756 09:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:52.756 09:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:52.756 09:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:53.014 09:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.014 09:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:53.014 09:31:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.014 09:31:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.014 09:31:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.014 09:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:53.014 { 00:21:53.014 "cntlid": 115, 00:21:53.014 "qid": 0, 00:21:53.014 "state": "enabled", 00:21:53.014 "thread": "nvmf_tgt_poll_group_000", 00:21:53.014 "listen_address": { 00:21:53.014 "trtype": "TCP", 00:21:53.014 "adrfam": "IPv4", 00:21:53.014 "traddr": "10.0.0.2", 00:21:53.014 "trsvcid": "4420" 00:21:53.014 }, 00:21:53.014 "peer_address": { 00:21:53.014 "trtype": "TCP", 00:21:53.014 "adrfam": "IPv4", 00:21:53.014 "traddr": "10.0.0.1", 00:21:53.014 "trsvcid": "42996" 00:21:53.014 }, 00:21:53.014 "auth": { 00:21:53.014 "state": "completed", 00:21:53.014 "digest": "sha512", 00:21:53.014 "dhgroup": "ffdhe3072" 00:21:53.014 } 00:21:53.014 } 
00:21:53.014 ]' 00:21:53.014 09:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:53.014 09:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:53.014 09:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:53.014 09:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:53.014 09:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:53.272 09:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:53.272 09:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:53.272 09:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.559 09:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MGZjM2VhZDFmMDg1OTVlODc3OWIwODUwZjNkOWIwYzNGi3HN: --dhchap-ctrl-secret DHHC-1:02:YzQxZmU0MGQ3ZjQ4MGNiNzVhNjcwNWU4MTcwZmZmZGQwNDIyYTcwMWE3ZjFkYTVjZh8NIw==: 00:21:54.497 09:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:54.497 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:54.497 09:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:54.497 09:31:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.497 09:31:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.497 09:31:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.497 09:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:54.497 09:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:54.497 09:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:54.755 09:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:21:54.755 09:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:54.755 09:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:54.755 09:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:54.755 09:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:54.755 09:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:54.755 09:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:54.755 09:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.755 09:31:39 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.755 09:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.755 09:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:54.755 09:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:55.013 00:21:55.013 09:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:55.013 09:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.013 09:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:55.271 09:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.271 09:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:55.271 09:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.271 09:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.271 09:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.271 09:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:55.271 { 00:21:55.271 "cntlid": 117, 00:21:55.271 "qid": 0, 00:21:55.271 "state": "enabled", 00:21:55.271 "thread": "nvmf_tgt_poll_group_000", 00:21:55.271 "listen_address": { 00:21:55.271 "trtype": "TCP", 00:21:55.271 "adrfam": "IPv4", 00:21:55.271 "traddr": "10.0.0.2", 00:21:55.271 "trsvcid": "4420" 00:21:55.271 }, 00:21:55.271 "peer_address": { 00:21:55.271 "trtype": "TCP", 00:21:55.271 "adrfam": "IPv4", 00:21:55.271 "traddr": "10.0.0.1", 00:21:55.271 "trsvcid": "52158" 00:21:55.271 }, 00:21:55.271 "auth": { 00:21:55.271 "state": "completed", 00:21:55.271 "digest": "sha512", 00:21:55.271 "dhgroup": "ffdhe3072" 00:21:55.271 } 00:21:55.271 } 00:21:55.271 ]' 00:21:55.271 09:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:55.271 09:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:55.271 09:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:55.271 09:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:55.271 09:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:55.271 09:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:55.271 09:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.271 09:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.529 09:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZDE5N2YzOTY4OTNjMzgwYTM4MmQ0MzdkYWMzMTc1NWNlMzQzOGNhZWM3ZmJkMmJkcIhjtQ==: --dhchap-ctrl-secret DHHC-1:01:ZWQ5NTllNDE4M2ZkMjY5YTBhMTE4NTYwZDhkMzI0NziXBON7: 00:21:56.463 09:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:56.463 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:56.463 09:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:56.463 09:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.463 09:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.463 09:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.463 09:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:56.463 09:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:56.463 09:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:57.029 09:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:21:57.029 09:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:57.029 09:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:57.029 09:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:57.029 09:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:57.029 09:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:57.029 09:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:57.029 09:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.029 09:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.029 09:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.029 09:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:57.029 09:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:57.287 00:21:57.287 09:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:57.287 09:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:57.287 09:31:41 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.545 09:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.545 09:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:57.545 09:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.545 09:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.545 09:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.545 09:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:57.545 { 00:21:57.545 "cntlid": 119, 00:21:57.545 "qid": 0, 00:21:57.545 "state": "enabled", 00:21:57.545 "thread": "nvmf_tgt_poll_group_000", 00:21:57.545 "listen_address": { 00:21:57.545 "trtype": "TCP", 00:21:57.545 "adrfam": "IPv4", 00:21:57.545 "traddr": "10.0.0.2", 00:21:57.545 "trsvcid": "4420" 00:21:57.545 }, 00:21:57.545 "peer_address": { 00:21:57.545 "trtype": "TCP", 00:21:57.545 "adrfam": "IPv4", 00:21:57.545 "traddr": "10.0.0.1", 00:21:57.545 "trsvcid": "52186" 00:21:57.545 }, 00:21:57.545 "auth": { 00:21:57.545 "state": "completed", 00:21:57.545 "digest": "sha512", 00:21:57.545 "dhgroup": "ffdhe3072" 00:21:57.545 } 00:21:57.545 } 00:21:57.545 ]' 00:21:57.545 09:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:57.545 09:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:57.545 09:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:57.545 09:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:57.545 09:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:57.545 09:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:57.545 09:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:57.545 09:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:57.803 09:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZDc0MDkzMjBjZjhlOTY1N2VlY2E2N2E5MGYxZDFhODliMWFjOTcyMzI2MzQyZDZmYmE2ODcxYzhlZDM1OTE4NgHa9ec=: 00:21:58.737 09:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:58.737 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:58.737 09:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:58.737 09:31:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.737 09:31:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.737 09:31:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.737 09:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:58.737 09:31:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:58.737 09:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:58.737 09:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:58.994 09:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:21:58.994 09:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:58.994 09:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:58.994 09:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:58.994 09:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:58.994 09:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:58.994 09:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:58.994 09:31:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.994 09:31:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.994 09:31:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.994 09:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:58.994 09:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:59.558 00:21:59.558 09:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:59.558 09:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:59.558 09:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.815 09:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.815 09:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.815 09:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.815 09:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.815 09:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.815 09:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:59.815 { 00:21:59.815 "cntlid": 121, 00:21:59.815 "qid": 0, 00:21:59.815 "state": "enabled", 00:21:59.815 "thread": "nvmf_tgt_poll_group_000", 00:21:59.815 "listen_address": { 00:21:59.815 "trtype": "TCP", 00:21:59.815 "adrfam": "IPv4", 
00:21:59.815 "traddr": "10.0.0.2", 00:21:59.815 "trsvcid": "4420" 00:21:59.815 }, 00:21:59.815 "peer_address": { 00:21:59.815 "trtype": "TCP", 00:21:59.815 "adrfam": "IPv4", 00:21:59.815 "traddr": "10.0.0.1", 00:21:59.815 "trsvcid": "52216" 00:21:59.815 }, 00:21:59.815 "auth": { 00:21:59.815 "state": "completed", 00:21:59.815 "digest": "sha512", 00:21:59.815 "dhgroup": "ffdhe4096" 00:21:59.815 } 00:21:59.815 } 00:21:59.815 ]' 00:21:59.815 09:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:59.815 09:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:59.815 09:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:59.815 09:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:59.815 09:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:59.815 09:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:59.815 09:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:59.815 09:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:00.072 09:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NzQ1YjdmZDVmOTNlNTAxZTM0YTAzMGFmZDc4MTY5ZmM1N2MxNGZjMmRlMmVhZWEwIyEwPg==: --dhchap-ctrl-secret DHHC-1:03:ZTNhNmFiYTIyNmNjODY3YTdkMDYyNWMwMGIyMGE0NzdjNDM2YzJlZjE5YzA0ZDg1MWZkNzQwZTVjYTNjZjVhZBT0aQU=: 00:22:01.003 09:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:01.003 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:01.003 09:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:01.003 09:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.003 09:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.003 09:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.003 09:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:01.003 09:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:01.003 09:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:01.261 09:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:22:01.261 09:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:01.261 09:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:01.261 09:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:01.261 09:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:01.261 09:31:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:01.261 09:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.261 09:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.261 09:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.261 09:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.261 09:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.261 09:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.825 00:22:01.825 09:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:01.825 09:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:01.825 09:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:02.083 09:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.083 09:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:02.083 09:31:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.083 09:31:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.083 09:31:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.083 09:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:02.083 { 00:22:02.083 "cntlid": 123, 00:22:02.083 "qid": 0, 00:22:02.083 "state": "enabled", 00:22:02.083 "thread": "nvmf_tgt_poll_group_000", 00:22:02.083 "listen_address": { 00:22:02.083 "trtype": "TCP", 00:22:02.083 "adrfam": "IPv4", 00:22:02.083 "traddr": "10.0.0.2", 00:22:02.083 "trsvcid": "4420" 00:22:02.084 }, 00:22:02.084 "peer_address": { 00:22:02.084 "trtype": "TCP", 00:22:02.084 "adrfam": "IPv4", 00:22:02.084 "traddr": "10.0.0.1", 00:22:02.084 "trsvcid": "52238" 00:22:02.084 }, 00:22:02.084 "auth": { 00:22:02.084 "state": "completed", 00:22:02.084 "digest": "sha512", 00:22:02.084 "dhgroup": "ffdhe4096" 00:22:02.084 } 00:22:02.084 } 00:22:02.084 ]' 00:22:02.084 09:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:02.084 09:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:02.084 09:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:02.084 09:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:02.084 09:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:02.084 09:31:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:02.084 09:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:02.084 09:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:02.342 09:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MGZjM2VhZDFmMDg1OTVlODc3OWIwODUwZjNkOWIwYzNGi3HN: --dhchap-ctrl-secret DHHC-1:02:YzQxZmU0MGQ3ZjQ4MGNiNzVhNjcwNWU4MTcwZmZmZGQwNDIyYTcwMWE3ZjFkYTVjZh8NIw==: 00:22:03.275 09:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:03.275 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:03.275 09:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:03.275 09:31:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.275 09:31:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.275 09:31:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.275 09:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:03.275 09:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:03.275 09:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:03.533 09:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:22:03.533 09:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:03.533 09:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:03.533 09:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:03.533 09:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:03.533 09:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:03.533 09:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:03.533 09:31:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.533 09:31:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.533 09:31:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.533 09:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:03.533 09:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:04.099 00:22:04.099 09:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:04.099 09:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:04.099 09:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:04.357 09:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.357 09:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:04.357 09:31:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.357 09:31:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.357 09:31:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.357 09:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:04.357 { 00:22:04.357 "cntlid": 125, 00:22:04.357 "qid": 0, 00:22:04.357 "state": "enabled", 00:22:04.357 "thread": "nvmf_tgt_poll_group_000", 00:22:04.357 "listen_address": { 00:22:04.357 "trtype": "TCP", 00:22:04.357 "adrfam": "IPv4", 00:22:04.357 "traddr": "10.0.0.2", 00:22:04.357 "trsvcid": "4420" 00:22:04.357 }, 00:22:04.357 "peer_address": { 00:22:04.357 "trtype": "TCP", 00:22:04.357 "adrfam": "IPv4", 00:22:04.357 "traddr": "10.0.0.1", 00:22:04.357 "trsvcid": "38482" 00:22:04.357 }, 00:22:04.357 "auth": { 00:22:04.357 "state": "completed", 00:22:04.357 "digest": "sha512", 00:22:04.357 "dhgroup": "ffdhe4096" 00:22:04.357 } 00:22:04.357 } 00:22:04.357 ]' 00:22:04.357 09:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:04.357 09:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:04.357 09:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:04.357 09:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:04.357 09:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:04.357 09:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:04.357 09:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:04.357 09:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:04.614 09:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZDE5N2YzOTY4OTNjMzgwYTM4MmQ0MzdkYWMzMTc1NWNlMzQzOGNhZWM3ZmJkMmJkcIhjtQ==: --dhchap-ctrl-secret DHHC-1:01:ZWQ5NTllNDE4M2ZkMjY5YTBhMTE4NTYwZDhkMzI0NziXBON7: 00:22:05.547 09:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:05.547 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
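Each round of the trace above exercises one DHCHAP key/dhgroup combination end to end: restrict the host-side digests and DH groups, allow the host on the target with a key pair, attach and verify the negotiated auth state, detach, then repeat the handshake with the kernel initiator before removing the host. The following is a minimal sketch of that round, assuming an SPDK target already serving nqn.2024-03.io.spdk:cnode0 on 10.0.0.2:4420, rpc.py at the path shown, a host-side RPC socket at /var/tmp/host.sock, DHCHAP keys key1/ckey1 already registered on the target, and placeholder DHHC-1 secrets (the autotest's generated values are not reproduced here).

# One DH-HMAC-CHAP verification round as driven by target/auth.sh (illustrative sketch only).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
hostid=5b23e107-7094-e311-b1cb-001e67a97d55
subnqn=nqn.2024-03.io.spdk:cnode0

# Host side: restrict the initiator to one digest/dhgroup combination.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

# Target side: allow this host, bound to a previously loaded key pair (key1/ckey1).
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Host side: authenticate and attach, then read back what the target recorded for the qpair.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth | .digest, .dhgroup, .state'
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

# Same handshake with the kernel initiator, passing raw DHHC-1 secrets (placeholders here).
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" \
    --dhchap-secret "DHHC-1:01:<host-secret>" --dhchap-ctrl-secret "DHHC-1:02:<ctrl-secret>"
nvme disconnect -n "$subnqn"
$rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"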
00:22:05.547 09:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:05.547 09:31:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.547 09:31:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.547 09:31:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.547 09:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:05.547 09:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:05.547 09:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:05.804 09:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:22:05.804 09:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:05.804 09:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:05.804 09:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:05.804 09:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:05.804 09:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:05.804 09:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:05.804 09:31:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.804 09:31:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.804 09:31:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.805 09:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:05.805 09:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:06.383 00:22:06.383 09:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:06.383 09:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:06.383 09:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.641 09:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.641 09:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:06.641 09:31:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.641 09:31:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:22:06.641 09:31:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.641 09:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:06.641 { 00:22:06.641 "cntlid": 127, 00:22:06.641 "qid": 0, 00:22:06.641 "state": "enabled", 00:22:06.641 "thread": "nvmf_tgt_poll_group_000", 00:22:06.641 "listen_address": { 00:22:06.641 "trtype": "TCP", 00:22:06.642 "adrfam": "IPv4", 00:22:06.642 "traddr": "10.0.0.2", 00:22:06.642 "trsvcid": "4420" 00:22:06.642 }, 00:22:06.642 "peer_address": { 00:22:06.642 "trtype": "TCP", 00:22:06.642 "adrfam": "IPv4", 00:22:06.642 "traddr": "10.0.0.1", 00:22:06.642 "trsvcid": "38500" 00:22:06.642 }, 00:22:06.642 "auth": { 00:22:06.642 "state": "completed", 00:22:06.642 "digest": "sha512", 00:22:06.642 "dhgroup": "ffdhe4096" 00:22:06.642 } 00:22:06.642 } 00:22:06.642 ]' 00:22:06.642 09:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:06.642 09:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:06.642 09:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:06.642 09:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:06.642 09:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:06.642 09:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:06.642 09:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:06.642 09:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:06.899 09:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZDc0MDkzMjBjZjhlOTY1N2VlY2E2N2E5MGYxZDFhODliMWFjOTcyMzI2MzQyZDZmYmE2ODcxYzhlZDM1OTE4NgHa9ec=: 00:22:07.828 09:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:07.828 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:07.828 09:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:07.828 09:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.828 09:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.828 09:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.828 09:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:07.828 09:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:07.828 09:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:07.828 09:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:08.085 09:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:22:08.085 09:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:08.085 09:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:08.085 09:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:08.085 09:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:08.085 09:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:08.085 09:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:08.085 09:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.085 09:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.085 09:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.085 09:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:08.085 09:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:08.650 00:22:08.650 09:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:08.650 09:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:08.650 09:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:08.909 09:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.909 09:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:08.909 09:31:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.909 09:31:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.909 09:31:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.909 09:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:08.909 { 00:22:08.909 "cntlid": 129, 00:22:08.909 "qid": 0, 00:22:08.909 "state": "enabled", 00:22:08.909 "thread": "nvmf_tgt_poll_group_000", 00:22:08.909 "listen_address": { 00:22:08.909 "trtype": "TCP", 00:22:08.909 "adrfam": "IPv4", 00:22:08.909 "traddr": "10.0.0.2", 00:22:08.909 "trsvcid": "4420" 00:22:08.909 }, 00:22:08.909 "peer_address": { 00:22:08.909 "trtype": "TCP", 00:22:08.909 "adrfam": "IPv4", 00:22:08.909 "traddr": "10.0.0.1", 00:22:08.909 "trsvcid": "38532" 00:22:08.909 }, 00:22:08.909 "auth": { 00:22:08.909 "state": "completed", 00:22:08.909 "digest": "sha512", 00:22:08.909 "dhgroup": "ffdhe6144" 00:22:08.909 } 00:22:08.909 } 00:22:08.909 ]' 00:22:08.909 09:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:08.909 09:31:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:08.909 09:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:08.909 09:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:08.909 09:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:09.167 09:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:09.167 09:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:09.167 09:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:09.424 09:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NzQ1YjdmZDVmOTNlNTAxZTM0YTAzMGFmZDc4MTY5ZmM1N2MxNGZjMmRlMmVhZWEwIyEwPg==: --dhchap-ctrl-secret DHHC-1:03:ZTNhNmFiYTIyNmNjODY3YTdkMDYyNWMwMGIyMGE0NzdjNDM2YzJlZjE5YzA0ZDg1MWZkNzQwZTVjYTNjZjVhZBT0aQU=: 00:22:10.365 09:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:10.365 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:10.365 09:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:10.365 09:31:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.365 09:31:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.365 09:31:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.365 09:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:10.365 09:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:10.365 09:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:10.627 09:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:22:10.627 09:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:10.627 09:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:10.627 09:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:10.627 09:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:10.627 09:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:10.627 09:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.627 09:31:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.627 09:31:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.627 09:31:54 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.627 09:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.627 09:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:11.191 00:22:11.191 09:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:11.191 09:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:11.191 09:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:11.449 09:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.449 09:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:11.449 09:31:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.449 09:31:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.449 09:31:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.449 09:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:11.449 { 00:22:11.449 "cntlid": 131, 00:22:11.449 "qid": 0, 00:22:11.449 "state": "enabled", 00:22:11.449 "thread": "nvmf_tgt_poll_group_000", 00:22:11.449 "listen_address": { 00:22:11.449 "trtype": "TCP", 00:22:11.449 "adrfam": "IPv4", 00:22:11.449 "traddr": "10.0.0.2", 00:22:11.449 "trsvcid": "4420" 00:22:11.449 }, 00:22:11.449 "peer_address": { 00:22:11.449 "trtype": "TCP", 00:22:11.449 "adrfam": "IPv4", 00:22:11.449 "traddr": "10.0.0.1", 00:22:11.449 "trsvcid": "38558" 00:22:11.449 }, 00:22:11.449 "auth": { 00:22:11.449 "state": "completed", 00:22:11.449 "digest": "sha512", 00:22:11.449 "dhgroup": "ffdhe6144" 00:22:11.449 } 00:22:11.449 } 00:22:11.449 ]' 00:22:11.449 09:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:11.449 09:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:11.449 09:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:11.449 09:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:11.449 09:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:11.449 09:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:11.449 09:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:11.449 09:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:11.707 09:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MGZjM2VhZDFmMDg1OTVlODc3OWIwODUwZjNkOWIwYzNGi3HN: --dhchap-ctrl-secret DHHC-1:02:YzQxZmU0MGQ3ZjQ4MGNiNzVhNjcwNWU4MTcwZmZmZGQwNDIyYTcwMWE3ZjFkYTVjZh8NIw==: 00:22:12.641 09:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:12.641 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:12.641 09:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:12.641 09:31:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.641 09:31:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.641 09:31:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.641 09:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:12.641 09:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:12.641 09:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:12.899 09:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:22:12.899 09:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:12.899 09:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:12.899 09:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:12.899 09:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:12.899 09:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:12.899 09:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:12.899 09:31:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.899 09:31:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.899 09:31:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.899 09:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:12.899 09:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:13.464 00:22:13.464 09:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:13.464 09:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:13.464 09:31:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:13.722 09:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.722 09:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:13.722 09:31:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.722 09:31:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.722 09:31:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.722 09:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:13.722 { 00:22:13.722 "cntlid": 133, 00:22:13.722 "qid": 0, 00:22:13.722 "state": "enabled", 00:22:13.722 "thread": "nvmf_tgt_poll_group_000", 00:22:13.722 "listen_address": { 00:22:13.722 "trtype": "TCP", 00:22:13.722 "adrfam": "IPv4", 00:22:13.722 "traddr": "10.0.0.2", 00:22:13.722 "trsvcid": "4420" 00:22:13.722 }, 00:22:13.722 "peer_address": { 00:22:13.722 "trtype": "TCP", 00:22:13.722 "adrfam": "IPv4", 00:22:13.722 "traddr": "10.0.0.1", 00:22:13.722 "trsvcid": "38592" 00:22:13.722 }, 00:22:13.722 "auth": { 00:22:13.722 "state": "completed", 00:22:13.722 "digest": "sha512", 00:22:13.722 "dhgroup": "ffdhe6144" 00:22:13.722 } 00:22:13.722 } 00:22:13.722 ]' 00:22:13.722 09:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:13.722 09:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:13.722 09:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:13.980 09:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:13.980 09:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:13.980 09:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:13.980 09:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:13.980 09:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:14.238 09:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZDE5N2YzOTY4OTNjMzgwYTM4MmQ0MzdkYWMzMTc1NWNlMzQzOGNhZWM3ZmJkMmJkcIhjtQ==: --dhchap-ctrl-secret DHHC-1:01:ZWQ5NTllNDE4M2ZkMjY5YTBhMTE4NTYwZDhkMzI0NziXBON7: 00:22:15.172 09:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:15.172 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:15.172 09:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:15.172 09:31:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.172 09:31:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.172 09:31:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.172 09:31:59 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:15.172 09:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:15.172 09:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:15.430 09:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:22:15.430 09:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:15.430 09:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:15.430 09:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:15.430 09:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:15.430 09:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:15.430 09:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:15.430 09:31:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.430 09:31:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.430 09:31:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.430 09:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:15.430 09:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:15.996 00:22:15.996 09:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:15.996 09:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:15.996 09:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:16.253 09:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.253 09:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:16.253 09:32:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.253 09:32:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.253 09:32:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.253 09:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:16.253 { 00:22:16.253 "cntlid": 135, 00:22:16.253 "qid": 0, 00:22:16.253 "state": "enabled", 00:22:16.253 "thread": "nvmf_tgt_poll_group_000", 00:22:16.253 "listen_address": { 00:22:16.253 "trtype": "TCP", 00:22:16.253 "adrfam": "IPv4", 00:22:16.253 "traddr": "10.0.0.2", 00:22:16.253 "trsvcid": "4420" 00:22:16.253 }, 
00:22:16.253 "peer_address": { 00:22:16.253 "trtype": "TCP", 00:22:16.253 "adrfam": "IPv4", 00:22:16.253 "traddr": "10.0.0.1", 00:22:16.253 "trsvcid": "33986" 00:22:16.253 }, 00:22:16.253 "auth": { 00:22:16.253 "state": "completed", 00:22:16.253 "digest": "sha512", 00:22:16.253 "dhgroup": "ffdhe6144" 00:22:16.253 } 00:22:16.253 } 00:22:16.253 ]' 00:22:16.253 09:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:16.253 09:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:16.253 09:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:16.253 09:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:16.253 09:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:16.512 09:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:16.512 09:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:16.512 09:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:16.770 09:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZDc0MDkzMjBjZjhlOTY1N2VlY2E2N2E5MGYxZDFhODliMWFjOTcyMzI2MzQyZDZmYmE2ODcxYzhlZDM1OTE4NgHa9ec=: 00:22:17.704 09:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:17.704 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:17.704 09:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:17.704 09:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.704 09:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.704 09:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.704 09:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:17.704 09:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:17.704 09:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:17.704 09:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:17.962 09:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:22:17.962 09:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:17.962 09:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:17.962 09:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:17.962 09:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:17.962 09:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:22:17.962 09:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:17.962 09:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.962 09:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.962 09:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.962 09:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:17.962 09:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:18.958 00:22:18.958 09:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:18.958 09:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.958 09:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:19.214 09:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.214 09:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:19.214 09:32:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.214 09:32:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.214 09:32:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.214 09:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:19.214 { 00:22:19.214 "cntlid": 137, 00:22:19.214 "qid": 0, 00:22:19.214 "state": "enabled", 00:22:19.214 "thread": "nvmf_tgt_poll_group_000", 00:22:19.214 "listen_address": { 00:22:19.214 "trtype": "TCP", 00:22:19.214 "adrfam": "IPv4", 00:22:19.214 "traddr": "10.0.0.2", 00:22:19.214 "trsvcid": "4420" 00:22:19.214 }, 00:22:19.214 "peer_address": { 00:22:19.214 "trtype": "TCP", 00:22:19.214 "adrfam": "IPv4", 00:22:19.214 "traddr": "10.0.0.1", 00:22:19.214 "trsvcid": "34004" 00:22:19.214 }, 00:22:19.214 "auth": { 00:22:19.214 "state": "completed", 00:22:19.214 "digest": "sha512", 00:22:19.214 "dhgroup": "ffdhe8192" 00:22:19.214 } 00:22:19.214 } 00:22:19.214 ]' 00:22:19.214 09:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:19.214 09:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:19.214 09:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:19.214 09:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:19.214 09:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:19.214 09:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:19.214 09:32:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:19.214 09:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:19.471 09:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NzQ1YjdmZDVmOTNlNTAxZTM0YTAzMGFmZDc4MTY5ZmM1N2MxNGZjMmRlMmVhZWEwIyEwPg==: --dhchap-ctrl-secret DHHC-1:03:ZTNhNmFiYTIyNmNjODY3YTdkMDYyNWMwMGIyMGE0NzdjNDM2YzJlZjE5YzA0ZDg1MWZkNzQwZTVjYTNjZjVhZBT0aQU=: 00:22:20.402 09:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:20.402 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:20.402 09:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:20.402 09:32:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.402 09:32:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.660 09:32:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.660 09:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:20.660 09:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:20.660 09:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:20.660 09:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:22:20.660 09:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:20.660 09:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:20.660 09:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:20.660 09:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:20.660 09:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:20.660 09:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:20.660 09:32:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.660 09:32:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.660 09:32:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.660 09:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:20.660 09:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:21.593 00:22:21.593 09:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:21.593 09:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:21.593 09:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:21.851 09:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.851 09:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:21.851 09:32:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.851 09:32:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.852 09:32:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.852 09:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:21.852 { 00:22:21.852 "cntlid": 139, 00:22:21.852 "qid": 0, 00:22:21.852 "state": "enabled", 00:22:21.852 "thread": "nvmf_tgt_poll_group_000", 00:22:21.852 "listen_address": { 00:22:21.852 "trtype": "TCP", 00:22:21.852 "adrfam": "IPv4", 00:22:21.852 "traddr": "10.0.0.2", 00:22:21.852 "trsvcid": "4420" 00:22:21.852 }, 00:22:21.852 "peer_address": { 00:22:21.852 "trtype": "TCP", 00:22:21.852 "adrfam": "IPv4", 00:22:21.852 "traddr": "10.0.0.1", 00:22:21.852 "trsvcid": "34018" 00:22:21.852 }, 00:22:21.852 "auth": { 00:22:21.852 "state": "completed", 00:22:21.852 "digest": "sha512", 00:22:21.852 "dhgroup": "ffdhe8192" 00:22:21.852 } 00:22:21.852 } 00:22:21.852 ]' 00:22:21.852 09:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:21.852 09:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:21.852 09:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:22.109 09:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:22.110 09:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:22.110 09:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:22.110 09:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:22.110 09:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:22.367 09:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MGZjM2VhZDFmMDg1OTVlODc3OWIwODUwZjNkOWIwYzNGi3HN: --dhchap-ctrl-secret DHHC-1:02:YzQxZmU0MGQ3ZjQ4MGNiNzVhNjcwNWU4MTcwZmZmZGQwNDIyYTcwMWE3ZjFkYTVjZh8NIw==: 00:22:23.301 09:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:23.301 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:23.301 09:32:07 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:23.301 09:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.301 09:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.301 09:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.301 09:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:23.301 09:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:23.301 09:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:23.559 09:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:22:23.559 09:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:23.559 09:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:23.559 09:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:23.559 09:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:23.559 09:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:23.559 09:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:23.559 09:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.559 09:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.559 09:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.559 09:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:23.559 09:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:24.491 00:22:24.491 09:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:24.491 09:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:24.491 09:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:24.748 09:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.748 09:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:24.748 09:32:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.748 09:32:09 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:24.748 09:32:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.748 09:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:24.748 { 00:22:24.748 "cntlid": 141, 00:22:24.748 "qid": 0, 00:22:24.748 "state": "enabled", 00:22:24.748 "thread": "nvmf_tgt_poll_group_000", 00:22:24.748 "listen_address": { 00:22:24.748 "trtype": "TCP", 00:22:24.748 "adrfam": "IPv4", 00:22:24.748 "traddr": "10.0.0.2", 00:22:24.748 "trsvcid": "4420" 00:22:24.748 }, 00:22:24.748 "peer_address": { 00:22:24.748 "trtype": "TCP", 00:22:24.748 "adrfam": "IPv4", 00:22:24.748 "traddr": "10.0.0.1", 00:22:24.748 "trsvcid": "37332" 00:22:24.748 }, 00:22:24.748 "auth": { 00:22:24.748 "state": "completed", 00:22:24.748 "digest": "sha512", 00:22:24.748 "dhgroup": "ffdhe8192" 00:22:24.748 } 00:22:24.748 } 00:22:24.748 ]' 00:22:24.748 09:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:24.748 09:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:24.748 09:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:24.748 09:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:24.748 09:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:24.748 09:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:24.748 09:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:24.748 09:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:25.310 09:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZDE5N2YzOTY4OTNjMzgwYTM4MmQ0MzdkYWMzMTc1NWNlMzQzOGNhZWM3ZmJkMmJkcIhjtQ==: --dhchap-ctrl-secret DHHC-1:01:ZWQ5NTllNDE4M2ZkMjY5YTBhMTE4NTYwZDhkMzI0NziXBON7: 00:22:26.241 09:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:26.241 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:26.241 09:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:26.241 09:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.241 09:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.241 09:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.241 09:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:26.241 09:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:26.241 09:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:26.498 09:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:22:26.498 09:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:26.498 09:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:26.498 09:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:26.498 09:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:26.498 09:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:26.498 09:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:26.498 09:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.498 09:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.498 09:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.498 09:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:26.498 09:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:27.431 00:22:27.431 09:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:27.431 09:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:27.431 09:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:27.431 09:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.431 09:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:27.431 09:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.431 09:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.689 09:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.689 09:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:27.689 { 00:22:27.689 "cntlid": 143, 00:22:27.689 "qid": 0, 00:22:27.689 "state": "enabled", 00:22:27.689 "thread": "nvmf_tgt_poll_group_000", 00:22:27.689 "listen_address": { 00:22:27.689 "trtype": "TCP", 00:22:27.689 "adrfam": "IPv4", 00:22:27.689 "traddr": "10.0.0.2", 00:22:27.689 "trsvcid": "4420" 00:22:27.689 }, 00:22:27.689 "peer_address": { 00:22:27.689 "trtype": "TCP", 00:22:27.689 "adrfam": "IPv4", 00:22:27.689 "traddr": "10.0.0.1", 00:22:27.689 "trsvcid": "37350" 00:22:27.689 }, 00:22:27.689 "auth": { 00:22:27.689 "state": "completed", 00:22:27.689 "digest": "sha512", 00:22:27.689 "dhgroup": "ffdhe8192" 00:22:27.689 } 00:22:27.689 } 00:22:27.689 ]' 00:22:27.689 09:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:27.689 09:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:27.689 
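[editor's note] Each connect_authenticate round above and below ends with the same host-side verification: the attached controller name is read back over the host RPC socket, then the target's qpair list is filtered with jq and compared against the negotiated digest, DH group, and auth state. A minimal stand-alone sketch of that check, with the rpc.py path, subsystem NQN, and jq filters reproduced from this log; the target RPC socket is assumed to be the default /var/tmp/spdk.sock that the test's rpc_cmd wrapper reaches inside its network namespace, and the checks are written as bash tests so a run under "set -e" aborts on the first mismatch:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # controller created by bdev_nvme_attach_controller should be visible on the host socket
    [[ $($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    # target side: the qpair for this host should report the expected DH-HMAC-CHAP parameters
    qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]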
09:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:27.689 09:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:27.689 09:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:27.689 09:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:27.689 09:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:27.689 09:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:27.947 09:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZDc0MDkzMjBjZjhlOTY1N2VlY2E2N2E5MGYxZDFhODliMWFjOTcyMzI2MzQyZDZmYmE2ODcxYzhlZDM1OTE4NgHa9ec=: 00:22:28.891 09:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:28.891 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:28.891 09:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:28.891 09:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.891 09:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.891 09:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.891 09:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:22:28.891 09:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:22:28.891 09:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:22:28.891 09:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:28.891 09:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:28.891 09:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:29.154 09:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:22:29.154 09:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:29.154 09:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:29.154 09:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:29.154 09:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:29.154 09:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:29.154 09:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:22:29.154 09:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.154 09:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.154 09:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.154 09:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:29.154 09:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:30.084 00:22:30.084 09:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:30.084 09:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:30.084 09:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:30.341 09:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.341 09:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:30.341 09:32:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.341 09:32:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.341 09:32:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.341 09:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:30.341 { 00:22:30.341 "cntlid": 145, 00:22:30.341 "qid": 0, 00:22:30.341 "state": "enabled", 00:22:30.341 "thread": "nvmf_tgt_poll_group_000", 00:22:30.341 "listen_address": { 00:22:30.341 "trtype": "TCP", 00:22:30.341 "adrfam": "IPv4", 00:22:30.341 "traddr": "10.0.0.2", 00:22:30.341 "trsvcid": "4420" 00:22:30.341 }, 00:22:30.341 "peer_address": { 00:22:30.341 "trtype": "TCP", 00:22:30.341 "adrfam": "IPv4", 00:22:30.341 "traddr": "10.0.0.1", 00:22:30.341 "trsvcid": "37370" 00:22:30.341 }, 00:22:30.341 "auth": { 00:22:30.341 "state": "completed", 00:22:30.341 "digest": "sha512", 00:22:30.341 "dhgroup": "ffdhe8192" 00:22:30.341 } 00:22:30.341 } 00:22:30.341 ]' 00:22:30.341 09:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:30.341 09:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:30.341 09:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:30.598 09:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:30.598 09:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:30.598 09:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:30.598 09:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:30.598 09:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:30.855 09:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NzQ1YjdmZDVmOTNlNTAxZTM0YTAzMGFmZDc4MTY5ZmM1N2MxNGZjMmRlMmVhZWEwIyEwPg==: --dhchap-ctrl-secret DHHC-1:03:ZTNhNmFiYTIyNmNjODY3YTdkMDYyNWMwMGIyMGE0NzdjNDM2YzJlZjE5YzA0ZDg1MWZkNzQwZTVjYTNjZjVhZBT0aQU=: 00:22:31.824 09:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:31.824 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:31.824 09:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:31.824 09:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.824 09:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.824 09:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.824 09:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:31.824 09:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.824 09:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.824 09:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.824 09:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:31.824 09:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:31.824 09:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:31.824 09:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:31.824 09:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:31.824 09:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:31.824 09:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:31.824 09:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:31.824 09:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:22:32.759 request: 00:22:32.759 { 00:22:32.759 "name": "nvme0", 00:22:32.759 "trtype": "tcp", 00:22:32.759 "traddr": "10.0.0.2", 00:22:32.759 "adrfam": "ipv4", 00:22:32.759 "trsvcid": "4420", 00:22:32.759 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:32.759 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:32.759 "prchk_reftag": false, 00:22:32.759 "prchk_guard": false, 00:22:32.759 "hdgst": false, 00:22:32.759 "ddgst": false, 00:22:32.759 "dhchap_key": "key2", 00:22:32.759 "method": "bdev_nvme_attach_controller", 00:22:32.759 "req_id": 1 00:22:32.759 } 00:22:32.759 Got JSON-RPC error response 00:22:32.759 response: 00:22:32.759 { 00:22:32.759 "code": -5, 00:22:32.759 "message": "Input/output error" 00:22:32.759 } 00:22:32.759 09:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:32.759 09:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:32.759 09:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:32.759 09:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:32.759 09:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:32.759 09:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.759 09:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.759 09:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.759 09:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:32.759 09:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.759 09:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.759 09:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.759 09:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:32.759 09:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:32.759 09:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:32.759 09:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:32.759 09:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:32.759 09:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:32.759 09:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:32.759 09:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:32.759 09:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:33.693 request: 00:22:33.693 { 00:22:33.693 "name": "nvme0", 00:22:33.693 "trtype": "tcp", 00:22:33.693 "traddr": "10.0.0.2", 00:22:33.693 "adrfam": "ipv4", 00:22:33.693 "trsvcid": "4420", 00:22:33.693 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:33.693 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:33.693 "prchk_reftag": false, 00:22:33.693 "prchk_guard": false, 00:22:33.693 "hdgst": false, 00:22:33.693 "ddgst": false, 00:22:33.693 "dhchap_key": "key1", 00:22:33.693 "dhchap_ctrlr_key": "ckey2", 00:22:33.693 "method": "bdev_nvme_attach_controller", 00:22:33.693 "req_id": 1 00:22:33.693 } 00:22:33.693 Got JSON-RPC error response 00:22:33.693 response: 00:22:33.693 { 00:22:33.693 "code": -5, 00:22:33.693 "message": "Input/output error" 00:22:33.693 } 00:22:33.693 09:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:33.693 09:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:33.693 09:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:33.693 09:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:33.693 09:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:33.693 09:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.693 09:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.693 09:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.693 09:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:33.693 09:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.693 09:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.693 09:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.693 09:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:33.693 09:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:33.693 09:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:33.693 09:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:22:33.693 09:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:33.693 09:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:33.693 09:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:33.693 09:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:33.693 09:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:34.260 request: 00:22:34.260 { 00:22:34.260 "name": "nvme0", 00:22:34.260 "trtype": "tcp", 00:22:34.260 "traddr": "10.0.0.2", 00:22:34.260 "adrfam": "ipv4", 00:22:34.260 "trsvcid": "4420", 00:22:34.260 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:34.260 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:34.260 "prchk_reftag": false, 00:22:34.260 "prchk_guard": false, 00:22:34.260 "hdgst": false, 00:22:34.260 "ddgst": false, 00:22:34.260 "dhchap_key": "key1", 00:22:34.260 "dhchap_ctrlr_key": "ckey1", 00:22:34.260 "method": "bdev_nvme_attach_controller", 00:22:34.260 "req_id": 1 00:22:34.260 } 00:22:34.260 Got JSON-RPC error response 00:22:34.260 response: 00:22:34.260 { 00:22:34.260 "code": -5, 00:22:34.260 "message": "Input/output error" 00:22:34.260 } 00:22:34.260 09:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:34.260 09:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:34.260 09:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:34.260 09:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:34.260 09:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:34.260 09:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.260 09:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.518 09:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.518 09:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 748530 00:22:34.518 09:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 748530 ']' 00:22:34.518 09:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 748530 00:22:34.518 09:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:22:34.518 09:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:34.518 09:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 748530 00:22:34.518 09:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:34.518 09:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 
= sudo ']' 00:22:34.518 09:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 748530' 00:22:34.518 killing process with pid 748530 00:22:34.518 09:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 748530 00:22:34.518 09:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 748530 00:22:34.776 09:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:34.776 09:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:34.777 09:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:34.777 09:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.777 09:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=771124 00:22:34.777 09:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:34.777 09:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 771124 00:22:34.777 09:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 771124 ']' 00:22:34.777 09:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:34.777 09:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:34.777 09:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:34.777 09:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:34.777 09:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.035 09:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:35.035 09:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:22:35.035 09:32:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:35.035 09:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:35.035 09:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.035 09:32:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:35.035 09:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:35.035 09:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 771124 00:22:35.035 09:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 771124 ']' 00:22:35.035 09:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:35.035 09:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:35.035 09:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:35.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
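[editor's note] At this point the first target process (pid 748530) has been killed and a fresh nvmf_tgt is started with --wait-for-rpc and the nvmf_auth debug log flag, so the remaining rounds run against a target that traces the DH-HMAC-CHAP exchange. A minimal manual equivalent is sketched below; the command line and namespace name are reproduced from this log, while finishing startup with framework_start_init is an assumption about what the test's unlabeled rpc_cmd call does next, since --wait-for-rpc pauses the app before subsystem initialization:

    # restart the target inside the test's network namespace with auth debug logging
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!
    # --wait-for-rpc holds the app before subsystem init; complete startup over the default RPC socket
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init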
00:22:35.035 09:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:35.035 09:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.294 09:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:35.294 09:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:22:35.294 09:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:22:35.294 09:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.294 09:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.294 09:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.294 09:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:22:35.294 09:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:35.294 09:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:35.294 09:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:35.294 09:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:35.294 09:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:35.294 09:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:35.294 09:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.294 09:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.294 09:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.294 09:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:35.294 09:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:36.229 00:22:36.229 09:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:36.229 09:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:36.229 09:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:36.487 09:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:36.487 09:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:36.487 09:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.487 09:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.487 09:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.487 09:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:36.487 { 00:22:36.487 
"cntlid": 1, 00:22:36.487 "qid": 0, 00:22:36.487 "state": "enabled", 00:22:36.487 "thread": "nvmf_tgt_poll_group_000", 00:22:36.487 "listen_address": { 00:22:36.487 "trtype": "TCP", 00:22:36.487 "adrfam": "IPv4", 00:22:36.487 "traddr": "10.0.0.2", 00:22:36.487 "trsvcid": "4420" 00:22:36.487 }, 00:22:36.487 "peer_address": { 00:22:36.487 "trtype": "TCP", 00:22:36.487 "adrfam": "IPv4", 00:22:36.487 "traddr": "10.0.0.1", 00:22:36.487 "trsvcid": "47812" 00:22:36.487 }, 00:22:36.487 "auth": { 00:22:36.487 "state": "completed", 00:22:36.487 "digest": "sha512", 00:22:36.487 "dhgroup": "ffdhe8192" 00:22:36.487 } 00:22:36.487 } 00:22:36.487 ]' 00:22:36.487 09:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:36.745 09:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:36.745 09:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:36.745 09:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:36.745 09:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:36.745 09:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:36.745 09:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:36.745 09:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:37.004 09:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZDc0MDkzMjBjZjhlOTY1N2VlY2E2N2E5MGYxZDFhODliMWFjOTcyMzI2MzQyZDZmYmE2ODcxYzhlZDM1OTE4NgHa9ec=: 00:22:37.937 09:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:37.937 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:37.937 09:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:37.937 09:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.937 09:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.937 09:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.937 09:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:37.937 09:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.937 09:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.937 09:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.937 09:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:37.937 09:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:38.194 09:32:22 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:38.194 09:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:38.194 09:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:38.194 09:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:38.194 09:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:38.194 09:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:38.194 09:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:38.194 09:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:38.194 09:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:38.452 request: 00:22:38.452 { 00:22:38.452 "name": "nvme0", 00:22:38.452 "trtype": "tcp", 00:22:38.452 "traddr": "10.0.0.2", 00:22:38.452 "adrfam": "ipv4", 00:22:38.452 "trsvcid": "4420", 00:22:38.452 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:38.452 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:38.452 "prchk_reftag": false, 00:22:38.452 "prchk_guard": false, 00:22:38.452 "hdgst": false, 00:22:38.452 "ddgst": false, 00:22:38.452 "dhchap_key": "key3", 00:22:38.452 "method": "bdev_nvme_attach_controller", 00:22:38.452 "req_id": 1 00:22:38.452 } 00:22:38.452 Got JSON-RPC error response 00:22:38.452 response: 00:22:38.452 { 00:22:38.452 "code": -5, 00:22:38.452 "message": "Input/output error" 00:22:38.452 } 00:22:38.452 09:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:38.452 09:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:38.452 09:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:38.452 09:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:38.452 09:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:22:38.452 09:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:22:38.452 09:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:38.453 09:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:38.710 09:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:38.710 09:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:38.710 09:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:38.710 09:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:38.710 09:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:38.710 09:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:38.710 09:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:38.710 09:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:38.710 09:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:38.968 request: 00:22:38.968 { 00:22:38.968 "name": "nvme0", 00:22:38.968 "trtype": "tcp", 00:22:38.968 "traddr": "10.0.0.2", 00:22:38.968 "adrfam": "ipv4", 00:22:38.968 "trsvcid": "4420", 00:22:38.968 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:38.968 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:38.968 "prchk_reftag": false, 00:22:38.968 "prchk_guard": false, 00:22:38.968 "hdgst": false, 00:22:38.968 "ddgst": false, 00:22:38.968 "dhchap_key": "key3", 00:22:38.968 "method": "bdev_nvme_attach_controller", 00:22:38.968 "req_id": 1 00:22:38.968 } 00:22:38.968 Got JSON-RPC error response 00:22:38.968 response: 00:22:38.968 { 00:22:38.968 "code": -5, 00:22:38.968 "message": "Input/output error" 00:22:38.968 } 00:22:38.968 09:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:38.968 09:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:38.968 09:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:38.968 09:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:38.968 09:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:38.968 09:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:22:38.968 09:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:38.968 09:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:38.968 09:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:38.968 09:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:39.226 09:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:39.226 09:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.226 09:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.226 09:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.226 09:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:39.226 09:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.226 09:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.226 09:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.226 09:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:39.226 09:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:39.226 09:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:39.226 09:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:39.226 09:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:39.226 09:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:39.226 09:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:39.226 09:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:39.226 09:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:39.484 request: 00:22:39.484 { 00:22:39.484 "name": "nvme0", 00:22:39.484 "trtype": "tcp", 00:22:39.484 "traddr": "10.0.0.2", 00:22:39.484 "adrfam": "ipv4", 00:22:39.484 "trsvcid": "4420", 00:22:39.484 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:39.484 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:39.484 "prchk_reftag": false, 00:22:39.484 "prchk_guard": false, 00:22:39.484 "hdgst": false, 00:22:39.484 "ddgst": false, 00:22:39.484 
"dhchap_key": "key0", 00:22:39.484 "dhchap_ctrlr_key": "key1", 00:22:39.484 "method": "bdev_nvme_attach_controller", 00:22:39.484 "req_id": 1 00:22:39.484 } 00:22:39.484 Got JSON-RPC error response 00:22:39.484 response: 00:22:39.484 { 00:22:39.484 "code": -5, 00:22:39.484 "message": "Input/output error" 00:22:39.484 } 00:22:39.484 09:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:39.484 09:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:39.484 09:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:39.484 09:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:39.484 09:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:39.484 09:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:39.742 00:22:39.742 09:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:22:39.742 09:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:39.742 09:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:22:39.999 09:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.999 09:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:39.999 09:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:40.257 09:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:22:40.257 09:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:22:40.257 09:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 748664 00:22:40.257 09:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 748664 ']' 00:22:40.257 09:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 748664 00:22:40.257 09:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:22:40.257 09:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:40.257 09:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 748664 00:22:40.257 09:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:40.257 09:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:40.257 09:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 748664' 00:22:40.257 killing process with pid 748664 00:22:40.257 09:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 748664 00:22:40.257 09:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 748664 00:22:40.824 
09:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:40.824 09:32:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:40.824 09:32:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:22:40.824 09:32:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:40.824 09:32:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:22:40.824 09:32:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:40.824 09:32:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:40.824 rmmod nvme_tcp 00:22:40.824 rmmod nvme_fabrics 00:22:40.824 rmmod nvme_keyring 00:22:40.824 09:32:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:40.824 09:32:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:22:40.824 09:32:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:22:40.824 09:32:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 771124 ']' 00:22:40.824 09:32:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 771124 00:22:40.824 09:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 771124 ']' 00:22:40.824 09:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 771124 00:22:40.824 09:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:22:40.824 09:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:40.824 09:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 771124 00:22:40.824 09:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:40.824 09:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:40.824 09:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 771124' 00:22:40.824 killing process with pid 771124 00:22:40.824 09:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 771124 00:22:40.824 09:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 771124 00:22:41.083 09:32:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:41.083 09:32:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:41.083 09:32:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:41.083 09:32:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:41.083 09:32:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:41.084 09:32:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:41.084 09:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:41.084 09:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:43.613 09:32:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:43.613 09:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.1O1 /tmp/spdk.key-sha256.32b /tmp/spdk.key-sha384.2n2 /tmp/spdk.key-sha512.LMJ /tmp/spdk.key-sha512.gZK /tmp/spdk.key-sha384.ZGL /tmp/spdk.key-sha256.rMD '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:43.613 00:22:43.613 real 3m9.797s 00:22:43.613 user 7m21.854s 00:22:43.613 sys 0m24.997s 00:22:43.613 09:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:43.613 09:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.613 ************************************ 00:22:43.613 END TEST nvmf_auth_target 00:22:43.613 ************************************ 00:22:43.613 09:32:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:43.613 09:32:27 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:22:43.613 09:32:27 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:43.613 09:32:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:22:43.613 09:32:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:43.613 09:32:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:43.613 ************************************ 00:22:43.613 START TEST nvmf_bdevio_no_huge 00:22:43.613 ************************************ 00:22:43.613 09:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:43.613 * Looking for test storage... 00:22:43.613 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:43.613 09:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:43.613 09:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:43.613 09:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:43.613 09:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:43.613 09:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:43.613 09:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:43.613 09:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:43.613 09:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:43.613 09:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:43.613 09:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:43.613 09:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:43.613 09:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:43.613 09:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:43.613 09:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:43.613 09:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:43.613 09:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:43.613 09:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:43.613 09:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:22:43.613 09:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:43.613 09:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:43.613 09:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:43.613 09:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:43.613 09:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.613 09:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.613 09:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.613 09:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:43.613 09:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.613 09:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:22:43.613 09:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:43.613 09:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:43.613 09:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:43.613 09:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:43.613 09:32:27 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:43.613 09:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:43.613 09:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:43.613 09:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:43.613 09:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:43.613 09:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:43.613 09:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:43.613 09:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:43.613 09:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:43.613 09:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:43.613 09:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:43.613 09:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:43.613 09:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:43.613 09:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:43.613 09:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:43.613 09:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:43.613 09:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:43.613 09:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:22:43.613 09:32:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:45.516 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:45.516 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:22:45.516 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:45.516 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:45.516 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:45.516 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:45.516 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:45.516 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:22:45.516 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:45.516 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:22:45.516 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:22:45.516 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:22:45.516 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:22:45.516 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:45.517 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:45.517 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:45.517 
09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:45.517 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:45.517 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:45.517 09:32:29 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:45.517 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:45.517 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:22:45.517 00:22:45.517 --- 10.0.0.2 ping statistics --- 00:22:45.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:45.517 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:45.517 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:45.517 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:22:45.517 00:22:45.517 --- 10.0.0.1 ping statistics --- 00:22:45.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:45.517 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=773875 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 773875 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 773875 ']' 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:45.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:45.517 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:45.517 [2024-07-14 09:32:29.732436] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:22:45.517 [2024-07-14 09:32:29.732525] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:45.517 [2024-07-14 09:32:29.800443] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:45.517 [2024-07-14 09:32:29.879175] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:45.517 [2024-07-14 09:32:29.879240] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:45.517 [2024-07-14 09:32:29.879254] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:45.517 [2024-07-14 09:32:29.879265] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:45.517 [2024-07-14 09:32:29.879275] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:45.517 [2024-07-14 09:32:29.879364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:45.517 [2024-07-14 09:32:29.879491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:22:45.518 [2024-07-14 09:32:29.879560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:22:45.518 [2024-07-14 09:32:29.879563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:45.776 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:45.776 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:22:45.776 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:45.776 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:45.776 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:45.776 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:45.776 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:45.776 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.776 09:32:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:45.776 [2024-07-14 09:32:30.000376] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:45.776 09:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.776 09:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:45.776 09:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.776 09:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:45.776 Malloc0 00:22:45.776 09:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.776 09:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:45.776 09:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.776 09:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:45.776 09:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.776 09:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:45.776 09:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.776 09:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:45.776 09:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.776 09:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:45.776 09:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.776 09:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:45.776 [2024-07-14 09:32:30.040665] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:45.776 09:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.776 09:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:45.776 09:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:45.776 09:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:22:45.776 09:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:22:45.776 09:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:45.776 09:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:45.776 { 00:22:45.776 "params": { 00:22:45.776 "name": "Nvme$subsystem", 00:22:45.776 "trtype": "$TEST_TRANSPORT", 00:22:45.776 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.776 "adrfam": "ipv4", 00:22:45.776 "trsvcid": "$NVMF_PORT", 00:22:45.776 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.776 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.776 "hdgst": ${hdgst:-false}, 00:22:45.776 "ddgst": ${ddgst:-false} 00:22:45.776 }, 00:22:45.776 "method": "bdev_nvme_attach_controller" 00:22:45.776 } 00:22:45.776 EOF 00:22:45.776 )") 00:22:45.776 09:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:22:45.776 09:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:22:45.776 09:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:22:45.776 09:32:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:45.776 "params": { 00:22:45.776 "name": "Nvme1", 00:22:45.776 "trtype": "tcp", 00:22:45.776 "traddr": "10.0.0.2", 00:22:45.776 "adrfam": "ipv4", 00:22:45.776 "trsvcid": "4420", 00:22:45.776 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:45.777 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:45.777 "hdgst": false, 00:22:45.777 "ddgst": false 00:22:45.777 }, 00:22:45.777 "method": "bdev_nvme_attach_controller" 00:22:45.777 }' 00:22:45.777 [2024-07-14 09:32:30.087449] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:22:45.777 [2024-07-14 09:32:30.087539] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid773906 ] 00:22:45.777 [2024-07-14 09:32:30.147262] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:46.035 [2024-07-14 09:32:30.238498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:46.035 [2024-07-14 09:32:30.238543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:46.035 [2024-07-14 09:32:30.238546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:46.035 I/O targets: 00:22:46.035 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:46.035 00:22:46.035 00:22:46.035 CUnit - A unit testing framework for C - Version 2.1-3 00:22:46.035 http://cunit.sourceforge.net/ 00:22:46.035 00:22:46.035 00:22:46.035 Suite: bdevio tests on: Nvme1n1 00:22:46.035 Test: blockdev write read block ...passed 00:22:46.293 Test: blockdev write zeroes read block ...passed 00:22:46.293 Test: blockdev write zeroes read no split ...passed 00:22:46.293 Test: blockdev write zeroes read split ...passed 00:22:46.293 Test: blockdev write zeroes read split partial ...passed 00:22:46.293 Test: blockdev reset ...[2024-07-14 09:32:30.654426] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:46.293 [2024-07-14 09:32:30.654540] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5564e0 (9): Bad file descriptor 00:22:46.293 [2024-07-14 09:32:30.673649] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:46.293 passed 00:22:46.293 Test: blockdev write read 8 blocks ...passed 00:22:46.293 Test: blockdev write read size > 128k ...passed 00:22:46.293 Test: blockdev write read invalid size ...passed 00:22:46.551 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:46.551 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:46.551 Test: blockdev write read max offset ...passed 00:22:46.551 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:46.551 Test: blockdev writev readv 8 blocks ...passed 00:22:46.551 Test: blockdev writev readv 30 x 1block ...passed 00:22:46.551 Test: blockdev writev readv block ...passed 00:22:46.551 Test: blockdev writev readv size > 128k ...passed 00:22:46.551 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:46.551 Test: blockdev comparev and writev ...[2024-07-14 09:32:30.978667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:46.551 [2024-07-14 09:32:30.978702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.551 [2024-07-14 09:32:30.978726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:46.551 [2024-07-14 09:32:30.978743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:46.551 [2024-07-14 09:32:30.979181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:46.551 [2024-07-14 09:32:30.979209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:46.551 [2024-07-14 09:32:30.979233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:46.551 [2024-07-14 09:32:30.979249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:46.551 [2024-07-14 09:32:30.979702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:46.551 [2024-07-14 09:32:30.979729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:46.551 [2024-07-14 09:32:30.979753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:46.551 [2024-07-14 09:32:30.979769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:46.551 [2024-07-14 09:32:30.980210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:46.551 [2024-07-14 09:32:30.980247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:46.551 [2024-07-14 09:32:30.980282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:46.551 [2024-07-14 09:32:30.980302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:46.810 passed 00:22:46.810 Test: blockdev nvme passthru rw ...passed 00:22:46.810 Test: blockdev nvme passthru vendor specific ...[2024-07-14 09:32:31.064327] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:46.810 [2024-07-14 09:32:31.064356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:46.810 [2024-07-14 09:32:31.064634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:46.810 [2024-07-14 09:32:31.064659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:46.810 [2024-07-14 09:32:31.064933] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:46.810 [2024-07-14 09:32:31.064958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:46.810 [2024-07-14 09:32:31.065236] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:46.810 [2024-07-14 09:32:31.065260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:46.810 passed 00:22:46.810 Test: blockdev nvme admin passthru ...passed 00:22:46.810 Test: blockdev copy ...passed 00:22:46.810 00:22:46.810 Run Summary: Type Total Ran Passed Failed Inactive 00:22:46.810 suites 1 1 n/a 0 0 00:22:46.810 tests 23 23 23 0 0 00:22:46.810 asserts 152 152 152 0 n/a 00:22:46.810 00:22:46.810 Elapsed time = 1.368 seconds 00:22:47.068 09:32:31 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:47.068 09:32:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.068 09:32:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:47.068 09:32:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.068 09:32:31 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:47.068 09:32:31 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:47.068 09:32:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:47.068 09:32:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:22:47.068 09:32:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:47.068 09:32:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:22:47.068 09:32:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:47.068 09:32:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:47.068 rmmod nvme_tcp 00:22:47.068 rmmod nvme_fabrics 00:22:47.068 rmmod nvme_keyring 00:22:47.068 09:32:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:47.068 09:32:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:22:47.068 09:32:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:22:47.068 09:32:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 773875 ']' 00:22:47.068 09:32:31 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 773875 00:22:47.068 09:32:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 773875 ']' 00:22:47.068 09:32:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 773875 00:22:47.068 09:32:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:22:47.068 09:32:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:47.068 09:32:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 773875 00:22:47.326 09:32:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:22:47.326 09:32:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:22:47.326 09:32:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 773875' 00:22:47.326 killing process with pid 773875 00:22:47.326 09:32:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 773875 00:22:47.326 09:32:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 773875 00:22:47.583 09:32:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:47.583 09:32:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:47.583 09:32:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:47.583 09:32:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:47.583 09:32:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:47.583 09:32:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.583 09:32:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:47.583 09:32:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.494 09:32:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:49.494 00:22:49.494 real 0m6.407s 00:22:49.494 user 0m10.656s 00:22:49.494 sys 0m2.479s 00:22:49.494 09:32:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:49.494 09:32:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:49.494 ************************************ 00:22:49.494 END TEST nvmf_bdevio_no_huge 00:22:49.494 ************************************ 00:22:49.753 09:32:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:49.753 09:32:33 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:49.753 09:32:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:49.753 09:32:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:49.753 09:32:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:49.753 ************************************ 00:22:49.753 START TEST nvmf_tls 00:22:49.753 ************************************ 00:22:49.753 09:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:49.753 * Looking for test storage... 
00:22:49.753 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:49.753 09:32:34 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:49.753 09:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:49.753 09:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:49.753 09:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:49.753 09:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:49.753 09:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:49.753 09:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:49.753 09:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:49.753 09:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:49.753 09:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:49.753 09:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:49.753 09:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:49.753 09:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:49.753 09:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:49.753 09:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:49.753 09:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:49.753 09:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:49.753 09:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:49.753 09:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:49.753 09:32:34 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:49.753 09:32:34 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:49.753 09:32:34 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:49.753 09:32:34 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.753 09:32:34 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.753 09:32:34 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.753 09:32:34 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:49.753 09:32:34 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.753 09:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:22:49.753 09:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:49.753 09:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:49.753 09:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:49.753 09:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:49.753 09:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:49.753 09:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:49.753 09:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:49.753 09:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:49.753 09:32:34 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:49.753 09:32:34 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:22:49.753 09:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:49.753 09:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:49.753 09:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:49.753 09:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:49.753 09:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:49.753 09:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:49.753 09:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:49.753 09:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.753 09:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:49.753 09:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:49.753 09:32:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:22:49.753 09:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:22:51.655 
09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:51.655 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:51.655 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:51.655 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:51.655 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:51.655 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:22:51.656 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:51.656 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:51.656 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:51.656 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:51.656 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:51.656 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:51.656 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:51.656 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:51.656 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:51.656 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:51.656 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:51.656 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:51.656 09:32:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:51.656 09:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:51.656 09:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:51.656 09:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:51.656 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:51.656 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:22:51.656 00:22:51.656 --- 10.0.0.2 ping statistics --- 00:22:51.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.656 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:22:51.656 09:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:51.656 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:51.656 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:22:51.656 00:22:51.656 --- 10.0.0.1 ping statistics --- 00:22:51.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.656 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:22:51.656 09:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:51.656 09:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:22:51.656 09:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:51.656 09:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:51.656 09:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:51.656 09:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:51.656 09:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:51.656 09:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:51.656 09:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:51.656 09:32:36 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:51.656 09:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:51.656 09:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:51.656 09:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:51.656 09:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=776036 00:22:51.656 09:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:51.656 09:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 776036 00:22:51.656 09:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 776036 ']' 00:22:51.656 09:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:51.656 09:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:51.656 09:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:51.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:51.656 09:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:51.656 09:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:51.914 [2024-07-14 09:32:36.131208] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:22:51.914 [2024-07-14 09:32:36.131296] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:51.914 EAL: No free 2048 kB hugepages reported on node 1 00:22:51.914 [2024-07-14 09:32:36.201173] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.914 [2024-07-14 09:32:36.292582] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:51.914 [2024-07-14 09:32:36.292633] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
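The interface plumbing traced above (nvmf_tcp_init in nvmf/common.sh) amounts to moving one port of the E810 pair into a private network namespace, addressing both ends, and verifying reachability. A condensed sketch of the same steps, assuming the cvl_0_0/cvl_0_1 device names seen in this run:

    # cvl_0_0 becomes the target side inside its own namespace; cvl_0_1 stays as the initiator
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP through
    ping -c 1 10.0.0.2                                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator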
00:22:51.914 [2024-07-14 09:32:36.292647] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:51.914 [2024-07-14 09:32:36.292658] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:51.914 [2024-07-14 09:32:36.292668] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:51.914 [2024-07-14 09:32:36.292693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:51.914 09:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:51.914 09:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:51.914 09:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:51.914 09:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:51.914 09:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:51.914 09:32:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:51.914 09:32:36 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:22:51.914 09:32:36 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:52.172 true 00:22:52.172 09:32:36 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:52.172 09:32:36 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:22:52.430 09:32:36 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:22:52.430 09:32:36 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:22:52.430 09:32:36 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:52.688 09:32:37 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:52.688 09:32:37 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:22:52.945 09:32:37 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:22:52.945 09:32:37 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:22:52.945 09:32:37 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:53.203 09:32:37 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:53.203 09:32:37 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:22:53.461 09:32:37 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:22:53.461 09:32:37 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:22:53.461 09:32:37 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:53.461 09:32:37 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:22:53.719 09:32:38 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:22:53.719 09:32:38 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:22:53.719 09:32:38 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:54.285 09:32:38 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:54.285 09:32:38 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:22:54.285 09:32:38 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:22:54.285 09:32:38 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:22:54.285 09:32:38 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:54.543 09:32:38 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:54.543 09:32:38 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:22:54.800 09:32:39 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:22:54.800 09:32:39 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:22:54.800 09:32:39 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:54.800 09:32:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:54.800 09:32:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:54.800 09:32:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:54.800 09:32:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:22:54.800 09:32:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:54.800 09:32:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:54.800 09:32:39 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:54.800 09:32:39 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:54.800 09:32:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:54.800 09:32:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:54.800 09:32:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:54.800 09:32:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:22:54.800 09:32:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:54.800 09:32:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:55.058 09:32:39 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:55.058 09:32:39 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:22:55.058 09:32:39 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.bE6h8Ht9ZF 00:22:55.058 09:32:39 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:55.058 09:32:39 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.61orDEHT65 00:22:55.058 09:32:39 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:55.058 09:32:39 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:55.058 09:32:39 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.bE6h8Ht9ZF 00:22:55.058 09:32:39 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.61orDEHT65 00:22:55.058 09:32:39 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:22:55.317 09:32:39 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:55.575 09:32:39 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.bE6h8Ht9ZF 00:22:55.575 09:32:39 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.bE6h8Ht9ZF 00:22:55.575 09:32:39 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:55.833 [2024-07-14 09:32:40.122364] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:55.833 09:32:40 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:56.091 09:32:40 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:56.350 [2024-07-14 09:32:40.615709] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:56.350 [2024-07-14 09:32:40.615956] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:56.350 09:32:40 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:56.608 malloc0 00:22:56.608 09:32:40 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:56.866 09:32:41 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.bE6h8Ht9ZF 00:22:57.125 [2024-07-14 09:32:41.373063] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:57.125 09:32:41 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.bE6h8Ht9ZF 00:22:57.125 EAL: No free 2048 kB hugepages reported on node 1 00:23:07.103 Initializing NVMe Controllers 00:23:07.103 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:07.103 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:07.103 Initialization complete. Launching workers. 
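Before the data-path runs, the target is configured for TLS through a short RPC sequence (target/tls.sh@130-133 plus the setup_nvmf_tgt helper at @49-58). A condensed sketch, with rpc.py standing in for the full scripts/rpc.py path and /tmp/tmp.bE6h8Ht9ZF holding the interchange-format PSK (NVMeTLSkey-1:01:<base64>:) written by format_interchange_psk earlier in the trace:

    # pin the ssl sock impl to TLS 1.3, then let the target finish init (--wait-for-rpc)
    rpc.py sock_impl_set_options -i ssl --tls-version 13
    rpc.py framework_start_init
    # TCP transport, a subsystem, a TLS-enabled listener (-k) and a malloc namespace to export
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # tie host1 to the PSK file; only this host/key pairing is expected to connect
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.bE6h8Ht9ZF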
00:23:07.103 ======================================================== 00:23:07.103 Latency(us) 00:23:07.103 Device Information : IOPS MiB/s Average min max 00:23:07.103 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7574.50 29.59 8452.36 1255.06 10345.80 00:23:07.103 ======================================================== 00:23:07.103 Total : 7574.50 29.59 8452.36 1255.06 10345.80 00:23:07.103 00:23:07.103 09:32:51 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.bE6h8Ht9ZF 00:23:07.103 09:32:51 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:07.103 09:32:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:07.103 09:32:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:07.103 09:32:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.bE6h8Ht9ZF' 00:23:07.103 09:32:51 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:07.103 09:32:51 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=777861 00:23:07.103 09:32:51 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:07.103 09:32:51 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:07.103 09:32:51 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 777861 /var/tmp/bdevperf.sock 00:23:07.103 09:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 777861 ']' 00:23:07.103 09:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:07.103 09:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:07.103 09:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:07.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:07.103 09:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:07.104 09:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:07.104 [2024-07-14 09:32:51.540318] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:23:07.104 [2024-07-14 09:32:51.540390] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid777861 ] 00:23:07.361 EAL: No free 2048 kB hugepages reported on node 1 00:23:07.361 [2024-07-14 09:32:51.597655] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.361 [2024-07-14 09:32:51.681832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:07.361 09:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:07.361 09:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:07.361 09:32:51 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.bE6h8Ht9ZF 00:23:07.924 [2024-07-14 09:32:52.074841] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:07.924 [2024-07-14 09:32:52.074978] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:07.924 TLSTESTn1 00:23:07.924 09:32:52 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:07.924 Running I/O for 10 seconds... 00:23:20.108 00:23:20.108 Latency(us) 00:23:20.108 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:20.108 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:20.108 Verification LBA range: start 0x0 length 0x2000 00:23:20.108 TLSTESTn1 : 10.07 1399.09 5.47 0.00 0.00 91223.27 9563.40 129712.73 00:23:20.108 =================================================================================================================== 00:23:20.108 Total : 1399.09 5.47 0.00 0.00 91223.27 9563.40 129712.73 00:23:20.108 0 00:23:20.108 09:33:02 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:20.108 09:33:02 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 777861 00:23:20.108 09:33:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 777861 ']' 00:23:20.108 09:33:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 777861 00:23:20.108 09:33:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:20.108 09:33:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:20.108 09:33:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 777861 00:23:20.108 09:33:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:20.108 09:33:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:20.108 09:33:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 777861' 00:23:20.108 killing process with pid 777861 00:23:20.108 09:33:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 777861 00:23:20.108 Received shutdown signal, test time was about 10.000000 seconds 00:23:20.108 00:23:20.108 Latency(us) 00:23:20.108 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average 
min max 00:23:20.108 =================================================================================================================== 00:23:20.108 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:20.108 [2024-07-14 09:33:02.428500] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:20.108 09:33:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 777861 00:23:20.108 09:33:02 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.61orDEHT65 00:23:20.108 09:33:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:20.108 09:33:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.61orDEHT65 00:23:20.108 09:33:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:20.108 09:33:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:20.108 09:33:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:20.108 09:33:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:20.108 09:33:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.61orDEHT65 00:23:20.108 09:33:02 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:20.108 09:33:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:20.108 09:33:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:20.108 09:33:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.61orDEHT65' 00:23:20.108 09:33:02 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:20.108 09:33:02 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=779284 00:23:20.108 09:33:02 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:20.108 09:33:02 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:20.108 09:33:02 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 779284 /var/tmp/bdevperf.sock 00:23:20.108 09:33:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 779284 ']' 00:23:20.108 09:33:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:20.108 09:33:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:20.108 09:33:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:20.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:20.108 09:33:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:20.108 09:33:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:20.108 [2024-07-14 09:33:02.694936] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
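run_bdevperf (target/tls.sh@22-41), used for the TLSTESTn1 pass above and reused by the negative cases that follow, is the initiator side of the test: a bdevperf instance on its own RPC socket, a TLS-enabled attach, then timed I/O. A rough sketch, paths relative to the spdk checkout:

    # start bdevperf idle (-z) on a private RPC socket; the harness waits for the socket first
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    # attach to the TLS listener with host1's PSK; on success this creates bdev TLSTESTn1
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.bE6h8Ht9ZF
    # drive the configured verify workload for the test window
    examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests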
00:23:20.108 [2024-07-14 09:33:02.695018] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid779284 ] 00:23:20.108 EAL: No free 2048 kB hugepages reported on node 1 00:23:20.108 [2024-07-14 09:33:02.753628] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.108 [2024-07-14 09:33:02.834847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:20.108 09:33:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:20.108 09:33:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:20.108 09:33:02 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.61orDEHT65 00:23:20.108 [2024-07-14 09:33:03.174489] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:20.108 [2024-07-14 09:33:03.174614] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:20.108 [2024-07-14 09:33:03.180154] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:20.108 [2024-07-14 09:33:03.180669] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2517ab0 (107): Transport endpoint is not connected 00:23:20.108 [2024-07-14 09:33:03.181657] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2517ab0 (9): Bad file descriptor 00:23:20.108 [2024-07-14 09:33:03.182656] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:20.108 [2024-07-14 09:33:03.182675] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:20.108 [2024-07-14 09:33:03.182707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:20.108 request: 00:23:20.108 { 00:23:20.108 "name": "TLSTEST", 00:23:20.108 "trtype": "tcp", 00:23:20.108 "traddr": "10.0.0.2", 00:23:20.108 "adrfam": "ipv4", 00:23:20.108 "trsvcid": "4420", 00:23:20.108 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:20.108 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:20.108 "prchk_reftag": false, 00:23:20.108 "prchk_guard": false, 00:23:20.108 "hdgst": false, 00:23:20.108 "ddgst": false, 00:23:20.108 "psk": "/tmp/tmp.61orDEHT65", 00:23:20.108 "method": "bdev_nvme_attach_controller", 00:23:20.108 "req_id": 1 00:23:20.108 } 00:23:20.108 Got JSON-RPC error response 00:23:20.108 response: 00:23:20.108 { 00:23:20.108 "code": -5, 00:23:20.108 "message": "Input/output error" 00:23:20.108 } 00:23:20.108 09:33:03 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 779284 00:23:20.108 09:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 779284 ']' 00:23:20.108 09:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 779284 00:23:20.108 09:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:20.108 09:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:20.109 09:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 779284 00:23:20.109 09:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:20.109 09:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:20.109 09:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 779284' 00:23:20.109 killing process with pid 779284 00:23:20.109 09:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 779284 00:23:20.109 Received shutdown signal, test time was about 10.000000 seconds 00:23:20.109 00:23:20.109 Latency(us) 00:23:20.109 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:20.109 =================================================================================================================== 00:23:20.109 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:20.109 [2024-07-14 09:33:03.235014] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:20.109 09:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 779284 00:23:20.109 09:33:03 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:20.109 09:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:20.109 09:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:20.109 09:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:20.109 09:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:20.109 09:33:03 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.bE6h8Ht9ZF 00:23:20.109 09:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:20.109 09:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.bE6h8Ht9ZF 00:23:20.109 09:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:20.109 09:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:20.109 09:33:03 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:20.109 09:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:20.109 09:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.bE6h8Ht9ZF 00:23:20.109 09:33:03 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:20.109 09:33:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:20.109 09:33:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:20.109 09:33:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.bE6h8Ht9ZF' 00:23:20.109 09:33:03 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:20.109 09:33:03 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=779308 00:23:20.109 09:33:03 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:20.109 09:33:03 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:20.109 09:33:03 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 779308 /var/tmp/bdevperf.sock 00:23:20.109 09:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 779308 ']' 00:23:20.109 09:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:20.109 09:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:20.109 09:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:20.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:20.109 09:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:20.109 09:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:20.109 [2024-07-14 09:33:03.501746] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:23:20.109 [2024-07-14 09:33:03.501859] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid779308 ] 00:23:20.109 EAL: No free 2048 kB hugepages reported on node 1 00:23:20.109 [2024-07-14 09:33:03.565498] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.109 [2024-07-14 09:33:03.653281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:20.109 09:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:20.109 09:33:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:20.109 09:33:03 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.bE6h8Ht9ZF 00:23:20.109 [2024-07-14 09:33:03.993125] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:20.109 [2024-07-14 09:33:03.993279] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:20.109 [2024-07-14 09:33:04.002173] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:20.109 [2024-07-14 09:33:04.002206] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:20.109 [2024-07-14 09:33:04.002270] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:20.109 [2024-07-14 09:33:04.003378] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2139ab0 (107): Transport endpoint is not connected 00:23:20.109 [2024-07-14 09:33:04.004369] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2139ab0 (9): Bad file descriptor 00:23:20.109 [2024-07-14 09:33:04.005368] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:20.109 [2024-07-14 09:33:04.005387] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:20.109 [2024-07-14 09:33:04.005431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:20.109 request: 00:23:20.109 { 00:23:20.109 "name": "TLSTEST", 00:23:20.109 "trtype": "tcp", 00:23:20.109 "traddr": "10.0.0.2", 00:23:20.109 "adrfam": "ipv4", 00:23:20.109 "trsvcid": "4420", 00:23:20.109 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:20.109 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:20.109 "prchk_reftag": false, 00:23:20.109 "prchk_guard": false, 00:23:20.109 "hdgst": false, 00:23:20.109 "ddgst": false, 00:23:20.109 "psk": "/tmp/tmp.bE6h8Ht9ZF", 00:23:20.109 "method": "bdev_nvme_attach_controller", 00:23:20.109 "req_id": 1 00:23:20.109 } 00:23:20.109 Got JSON-RPC error response 00:23:20.109 response: 00:23:20.109 { 00:23:20.109 "code": -5, 00:23:20.109 "message": "Input/output error" 00:23:20.109 } 00:23:20.109 09:33:04 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 779308 00:23:20.109 09:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 779308 ']' 00:23:20.109 09:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 779308 00:23:20.109 09:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:20.109 09:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:20.109 09:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 779308 00:23:20.109 09:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:20.109 09:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:20.109 09:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 779308' 00:23:20.109 killing process with pid 779308 00:23:20.109 09:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 779308 00:23:20.109 Received shutdown signal, test time was about 10.000000 seconds 00:23:20.109 00:23:20.109 Latency(us) 00:23:20.109 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:20.109 =================================================================================================================== 00:23:20.109 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:20.109 [2024-07-14 09:33:04.058550] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:20.109 09:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 779308 00:23:20.109 09:33:04 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:20.109 09:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:20.109 09:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:20.109 09:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:20.109 09:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:20.109 09:33:04 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.bE6h8Ht9ZF 00:23:20.109 09:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:20.109 09:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.bE6h8Ht9ZF 00:23:20.109 09:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:20.109 09:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:20.109 09:33:04 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:20.109 09:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:20.109 09:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.bE6h8Ht9ZF 00:23:20.109 09:33:04 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:20.109 09:33:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:20.109 09:33:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:20.109 09:33:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.bE6h8Ht9ZF' 00:23:20.109 09:33:04 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:20.109 09:33:04 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=779444 00:23:20.109 09:33:04 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:20.109 09:33:04 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:20.109 09:33:04 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 779444 /var/tmp/bdevperf.sock 00:23:20.109 09:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 779444 ']' 00:23:20.109 09:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:20.109 09:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:20.109 09:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:20.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:20.109 09:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:20.109 09:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:20.109 [2024-07-14 09:33:04.319762] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:23:20.109 [2024-07-14 09:33:04.319843] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid779444 ] 00:23:20.109 EAL: No free 2048 kB hugepages reported on node 1 00:23:20.109 [2024-07-14 09:33:04.379153] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.109 [2024-07-14 09:33:04.469758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:20.367 09:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:20.367 09:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:20.367 09:33:04 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.bE6h8Ht9ZF 00:23:20.624 [2024-07-14 09:33:04.859250] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:20.624 [2024-07-14 09:33:04.859382] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:20.624 [2024-07-14 09:33:04.864708] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:20.624 [2024-07-14 09:33:04.864742] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:20.624 [2024-07-14 09:33:04.864783] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:20.624 [2024-07-14 09:33:04.865283] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b32ab0 (107): Transport endpoint is not connected 00:23:20.624 [2024-07-14 09:33:04.866269] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b32ab0 (9): Bad file descriptor 00:23:20.624 [2024-07-14 09:33:04.867268] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:20.624 [2024-07-14 09:33:04.867288] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:20.624 [2024-07-14 09:33:04.867319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:23:20.624 request: 00:23:20.624 { 00:23:20.624 "name": "TLSTEST", 00:23:20.624 "trtype": "tcp", 00:23:20.624 "traddr": "10.0.0.2", 00:23:20.624 "adrfam": "ipv4", 00:23:20.624 "trsvcid": "4420", 00:23:20.624 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:20.624 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:20.624 "prchk_reftag": false, 00:23:20.624 "prchk_guard": false, 00:23:20.624 "hdgst": false, 00:23:20.624 "ddgst": false, 00:23:20.624 "psk": "/tmp/tmp.bE6h8Ht9ZF", 00:23:20.624 "method": "bdev_nvme_attach_controller", 00:23:20.624 "req_id": 1 00:23:20.624 } 00:23:20.624 Got JSON-RPC error response 00:23:20.624 response: 00:23:20.624 { 00:23:20.624 "code": -5, 00:23:20.624 "message": "Input/output error" 00:23:20.624 } 00:23:20.624 09:33:04 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 779444 00:23:20.624 09:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 779444 ']' 00:23:20.624 09:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 779444 00:23:20.624 09:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:20.624 09:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:20.624 09:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 779444 00:23:20.624 09:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:20.624 09:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:20.624 09:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 779444' 00:23:20.624 killing process with pid 779444 00:23:20.624 09:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 779444 00:23:20.624 Received shutdown signal, test time was about 10.000000 seconds 00:23:20.624 00:23:20.624 Latency(us) 00:23:20.624 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:20.624 =================================================================================================================== 00:23:20.624 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:20.624 [2024-07-14 09:33:04.921586] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:20.624 09:33:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 779444 00:23:20.882 09:33:05 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:20.882 09:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:20.882 09:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:20.882 09:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:20.882 09:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:20.882 09:33:05 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:20.882 09:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:20.882 09:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:20.882 09:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:20.882 09:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:20.882 09:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t 
run_bdevperf 00:23:20.882 09:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:20.882 09:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:20.882 09:33:05 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:20.882 09:33:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:20.882 09:33:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:20.882 09:33:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:20.882 09:33:05 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:20.882 09:33:05 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=779571 00:23:20.882 09:33:05 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:20.882 09:33:05 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:20.882 09:33:05 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 779571 /var/tmp/bdevperf.sock 00:23:20.882 09:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 779571 ']' 00:23:20.882 09:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:20.882 09:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:20.882 09:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:20.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:20.882 09:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:20.882 09:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:20.882 [2024-07-14 09:33:05.188781] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
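Each attach case in this block starts its own bdevperf in RPC-server mode and waits for its socket before configuring it; the invocation is the one shown above. A minimal sketch of that pattern follows. The polling loop is a rough stand-in for the waitforlisten helper from autotest_common.sh, not that helper's actual body:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # -z: wait for RPC configuration instead of running a job file;
  # -r: private RPC socket for this bdevperf instance.
  $SPDK/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 &
  bdevperf_pid=$!
  # Block until the UNIX socket accepts RPCs (stand-in for waitforlisten):
  until $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
  done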
00:23:20.882 [2024-07-14 09:33:05.188888] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid779571 ] 00:23:20.882 EAL: No free 2048 kB hugepages reported on node 1 00:23:20.882 [2024-07-14 09:33:05.247573] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.882 [2024-07-14 09:33:05.330957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:21.139 09:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:21.139 09:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:21.139 09:33:05 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:21.397 [2024-07-14 09:33:05.676508] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:21.397 [2024-07-14 09:33:05.677862] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb1be60 (9): Bad file descriptor 00:23:21.397 [2024-07-14 09:33:05.678849] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:21.397 [2024-07-14 09:33:05.678873] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:21.397 [2024-07-14 09:33:05.678905] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
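This second case (tls.sh@155) repeats the attach with no PSK at all against the same TLS listener; the plain connection is torn down during controller init and the RPC fails the same way, code -5, per the dump just below. The call differs from the previous one only in the missing --psk:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  # -> "Transport endpoint is not connected", then code -5 from the RPC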
00:23:21.397 request: 00:23:21.397 { 00:23:21.397 "name": "TLSTEST", 00:23:21.397 "trtype": "tcp", 00:23:21.397 "traddr": "10.0.0.2", 00:23:21.397 "adrfam": "ipv4", 00:23:21.397 "trsvcid": "4420", 00:23:21.397 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:21.397 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:21.397 "prchk_reftag": false, 00:23:21.397 "prchk_guard": false, 00:23:21.397 "hdgst": false, 00:23:21.397 "ddgst": false, 00:23:21.397 "method": "bdev_nvme_attach_controller", 00:23:21.397 "req_id": 1 00:23:21.397 } 00:23:21.397 Got JSON-RPC error response 00:23:21.397 response: 00:23:21.397 { 00:23:21.397 "code": -5, 00:23:21.397 "message": "Input/output error" 00:23:21.397 } 00:23:21.397 09:33:05 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 779571 00:23:21.397 09:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 779571 ']' 00:23:21.397 09:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 779571 00:23:21.397 09:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:21.397 09:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:21.397 09:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 779571 00:23:21.397 09:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:21.397 09:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:21.397 09:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 779571' 00:23:21.397 killing process with pid 779571 00:23:21.397 09:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 779571 00:23:21.397 Received shutdown signal, test time was about 10.000000 seconds 00:23:21.397 00:23:21.397 Latency(us) 00:23:21.397 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:21.397 =================================================================================================================== 00:23:21.397 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:21.397 09:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 779571 00:23:21.655 09:33:05 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:21.655 09:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:21.655 09:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:21.655 09:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:21.655 09:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:21.655 09:33:05 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 776036 00:23:21.655 09:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 776036 ']' 00:23:21.655 09:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 776036 00:23:21.655 09:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:21.655 09:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:21.655 09:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 776036 00:23:21.655 09:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:21.655 09:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:21.655 09:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 776036' 00:23:21.655 killing 
process with pid 776036 00:23:21.655 09:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 776036 00:23:21.655 [2024-07-14 09:33:05.944621] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:21.655 09:33:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 776036 00:23:21.912 09:33:06 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:21.912 09:33:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:21.912 09:33:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:23:21.912 09:33:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:21.912 09:33:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:21.912 09:33:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:23:21.912 09:33:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:23:21.912 09:33:06 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:21.912 09:33:06 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:23:21.912 09:33:06 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.n1promfwBG 00:23:21.912 09:33:06 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:21.912 09:33:06 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.n1promfwBG 00:23:21.912 09:33:06 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:23:21.912 09:33:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:21.912 09:33:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:21.912 09:33:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:21.912 09:33:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=779727 00:23:21.912 09:33:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:21.912 09:33:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 779727 00:23:21.912 09:33:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 779727 ']' 00:23:21.912 09:33:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:21.912 09:33:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:21.912 09:33:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:21.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:21.912 09:33:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:21.912 09:33:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:21.912 [2024-07-14 09:33:06.293284] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
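With the old target killed, tls.sh@159-162 above builds the long-format key: format_interchange_psk wraps the hex key in the NVMe TLS PSK interchange string (NVMeTLSkey-1:02:...:), and the test writes it to a mktemp file readable only by its owner. The 0600 mode is load-bearing; both sides of the later permission tests reject a looser mode. The file handling, as run above:

  key_long='NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:'
  key_long_path=$(mktemp)               # /tmp/tmp.n1promfwBG in this run
  echo -n "$key_long" > "$key_long_path"
  chmod 0600 "$key_long_path"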
00:23:21.913 [2024-07-14 09:33:06.293378] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:21.913 EAL: No free 2048 kB hugepages reported on node 1 00:23:21.913 [2024-07-14 09:33:06.361382] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:22.170 [2024-07-14 09:33:06.448604] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:22.170 [2024-07-14 09:33:06.448670] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:22.170 [2024-07-14 09:33:06.448687] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:22.170 [2024-07-14 09:33:06.448700] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:22.170 [2024-07-14 09:33:06.448713] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:22.170 [2024-07-14 09:33:06.448750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:22.170 09:33:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:22.170 09:33:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:22.170 09:33:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:22.170 09:33:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:22.170 09:33:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:22.170 09:33:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:22.170 09:33:06 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.n1promfwBG 00:23:22.170 09:33:06 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.n1promfwBG 00:23:22.170 09:33:06 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:22.428 [2024-07-14 09:33:06.821562] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:22.428 09:33:06 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:22.686 09:33:07 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:22.943 [2024-07-14 09:33:07.318914] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:22.943 [2024-07-14 09:33:07.319175] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:22.943 09:33:07 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:23.200 malloc0 00:23:23.200 09:33:07 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:23.458 09:33:07 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.n1promfwBG 00:23:23.715 [2024-07-14 09:33:08.120503] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:23.715 09:33:08 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.n1promfwBG 00:23:23.715 09:33:08 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:23.715 09:33:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:23.715 09:33:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:23.715 09:33:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.n1promfwBG' 00:23:23.715 09:33:08 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:23.715 09:33:08 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=780366 00:23:23.716 09:33:08 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:23.716 09:33:08 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:23.716 09:33:08 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 780366 /var/tmp/bdevperf.sock 00:23:23.716 09:33:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 780366 ']' 00:23:23.716 09:33:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:23.716 09:33:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:23.716 09:33:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:23.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:23.716 09:33:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:23.716 09:33:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:23.973 [2024-07-14 09:33:08.179782] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
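The setup_nvmf_tgt calls above (tls.sh@49-58), plus the attach and the verify run that follow, boil down to the sequence below. $rpc is scripts/rpc.py from this tree; it talks to the target's default socket unless -s points at the bdevperf socket:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Target side: TCP transport, subsystem with a malloc namespace, TLS listener
  # (-k), and a host entry that carries the PSK path.
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.n1promfwBG
  # Initiator side (bdevperf pid 780366): attach with the same key, then drive
  # I/O for the verify run whose results follow.
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.n1promfwBG
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -t 20 -s /var/tmp/bdevperf.sock perform_tests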
00:23:23.973 [2024-07-14 09:33:08.179874] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid780366 ] 00:23:23.973 EAL: No free 2048 kB hugepages reported on node 1 00:23:23.973 [2024-07-14 09:33:08.237470] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.973 [2024-07-14 09:33:08.320750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:24.231 09:33:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:24.231 09:33:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:24.231 09:33:08 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.n1promfwBG 00:23:24.231 [2024-07-14 09:33:08.665277] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:24.231 [2024-07-14 09:33:08.665391] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:24.489 TLSTESTn1 00:23:24.489 09:33:08 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:24.489 Running I/O for 10 seconds... 00:23:36.695 00:23:36.695 Latency(us) 00:23:36.695 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:36.695 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:36.695 Verification LBA range: start 0x0 length 0x2000 00:23:36.695 TLSTESTn1 : 10.07 1464.23 5.72 0.00 0.00 87145.81 6310.87 127382.57 00:23:36.695 =================================================================================================================== 00:23:36.695 Total : 1464.23 5.72 0.00 0.00 87145.81 6310.87 127382.57 00:23:36.695 0 00:23:36.695 09:33:18 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:36.695 09:33:18 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 780366 00:23:36.695 09:33:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 780366 ']' 00:23:36.695 09:33:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 780366 00:23:36.695 09:33:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:36.695 09:33:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:36.695 09:33:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 780366 00:23:36.695 09:33:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:36.695 09:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:36.695 09:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 780366' 00:23:36.695 killing process with pid 780366 00:23:36.695 09:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 780366 00:23:36.695 Received shutdown signal, test time was about 10.000000 seconds 00:23:36.695 00:23:36.695 Latency(us) 00:23:36.695 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average 
min max 00:23:36.695 =================================================================================================================== 00:23:36.695 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:36.695 [2024-07-14 09:33:19.001671] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:36.695 09:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 780366 00:23:36.695 09:33:19 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.n1promfwBG 00:23:36.695 09:33:19 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.n1promfwBG 00:23:36.695 09:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:36.695 09:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.n1promfwBG 00:23:36.695 09:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:36.695 09:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:36.695 09:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:36.695 09:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:36.695 09:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.n1promfwBG 00:23:36.695 09:33:19 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:36.695 09:33:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:36.695 09:33:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:36.695 09:33:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.n1promfwBG' 00:23:36.695 09:33:19 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:36.695 09:33:19 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=781706 00:23:36.695 09:33:19 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:36.695 09:33:19 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:36.695 09:33:19 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 781706 /var/tmp/bdevperf.sock 00:23:36.695 09:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 781706 ']' 00:23:36.695 09:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:36.695 09:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:36.695 09:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:36.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:36.695 09:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:36.695 09:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.695 [2024-07-14 09:33:19.276211] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
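tls.sh@170 above loosens the key file to 0666, and the NOT case that follows repeats the attach: bdev_nvme refuses to load a world-readable PSK before any connection is made, and the RPC comes back with code -1, Operation not permitted, per the dump just below.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  chmod 0666 /tmp/tmp.n1promfwBG       # deliberately too permissive
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.n1promfwBG
  # -> "Incorrect permissions for PSK file", code -1; chmod 0600 on the key
  #    restores a usable file.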
00:23:36.695 [2024-07-14 09:33:19.276288] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid781706 ] 00:23:36.695 EAL: No free 2048 kB hugepages reported on node 1 00:23:36.695 [2024-07-14 09:33:19.332986] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.695 [2024-07-14 09:33:19.421894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:36.695 09:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:36.695 09:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:36.695 09:33:19 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.n1promfwBG 00:23:36.695 [2024-07-14 09:33:19.751982] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:36.695 [2024-07-14 09:33:19.752081] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:36.695 [2024-07-14 09:33:19.752096] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.n1promfwBG 00:23:36.695 request: 00:23:36.695 { 00:23:36.695 "name": "TLSTEST", 00:23:36.695 "trtype": "tcp", 00:23:36.695 "traddr": "10.0.0.2", 00:23:36.695 "adrfam": "ipv4", 00:23:36.695 "trsvcid": "4420", 00:23:36.695 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.695 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:36.695 "prchk_reftag": false, 00:23:36.695 "prchk_guard": false, 00:23:36.695 "hdgst": false, 00:23:36.695 "ddgst": false, 00:23:36.695 "psk": "/tmp/tmp.n1promfwBG", 00:23:36.695 "method": "bdev_nvme_attach_controller", 00:23:36.695 "req_id": 1 00:23:36.695 } 00:23:36.695 Got JSON-RPC error response 00:23:36.695 response: 00:23:36.695 { 00:23:36.695 "code": -1, 00:23:36.695 "message": "Operation not permitted" 00:23:36.695 } 00:23:36.695 09:33:19 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 781706 00:23:36.695 09:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 781706 ']' 00:23:36.695 09:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 781706 00:23:36.695 09:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:36.695 09:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:36.695 09:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 781706 00:23:36.695 09:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:36.695 09:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:36.695 09:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 781706' 00:23:36.695 killing process with pid 781706 00:23:36.695 09:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 781706 00:23:36.695 Received shutdown signal, test time was about 10.000000 seconds 00:23:36.695 00:23:36.695 Latency(us) 00:23:36.696 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:36.696 =================================================================================================================== 
00:23:36.696 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:36.696 09:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 781706 00:23:36.696 09:33:20 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:36.696 09:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:36.696 09:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:36.696 09:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:36.696 09:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:36.696 09:33:20 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 779727 00:23:36.696 09:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 779727 ']' 00:23:36.696 09:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 779727 00:23:36.696 09:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:36.696 09:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:36.696 09:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 779727 00:23:36.696 09:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:36.696 09:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:36.696 09:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 779727' 00:23:36.696 killing process with pid 779727 00:23:36.696 09:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 779727 00:23:36.696 [2024-07-14 09:33:20.050238] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:36.696 09:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 779727 00:23:36.696 09:33:20 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:23:36.696 09:33:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:36.696 09:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:36.696 09:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.696 09:33:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=781853 00:23:36.696 09:33:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:36.696 09:33:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 781853 00:23:36.696 09:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 781853 ']' 00:23:36.696 09:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:36.696 09:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:36.696 09:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:36.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:36.696 09:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:36.696 09:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.696 [2024-07-14 09:33:20.346326] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
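A fresh target (pid 781853, started above with nvmfappstart inside the test's network namespace) is used for the mirror-image check: the same permission rule is enforced when the PSK is registered on the target side. The sketch below condenses the launch above and the NOT setup_nvmf_tgt that follows; the intermediate transport, subsystem, listener, and namespace steps are elided:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  # ...transport, subsystem, listener, and namespace as before, then:
  $SPDK/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.n1promfwBG
  # With the key still 0666 this fails on the target: "Could not retrieve PSK
  # from file", JSON-RPC code -32603, Internal error.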
00:23:36.696 [2024-07-14 09:33:20.346427] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:36.696 EAL: No free 2048 kB hugepages reported on node 1 00:23:36.696 [2024-07-14 09:33:20.409657] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.696 [2024-07-14 09:33:20.493566] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:36.696 [2024-07-14 09:33:20.493612] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:36.696 [2024-07-14 09:33:20.493641] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:36.696 [2024-07-14 09:33:20.493652] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:36.696 [2024-07-14 09:33:20.493662] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:36.696 [2024-07-14 09:33:20.493687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:36.696 09:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:36.696 09:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:36.696 09:33:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:36.696 09:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:36.696 09:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.696 09:33:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:36.696 09:33:20 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.n1promfwBG 00:23:36.696 09:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:36.696 09:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.n1promfwBG 00:23:36.696 09:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:23:36.696 09:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:36.696 09:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:23:36.696 09:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:36.696 09:33:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.n1promfwBG 00:23:36.696 09:33:20 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.n1promfwBG 00:23:36.696 09:33:20 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:36.696 [2024-07-14 09:33:20.843917] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:36.696 09:33:20 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:36.696 09:33:21 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:36.954 [2024-07-14 09:33:21.369306] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is 
considered experimental 00:23:36.954 [2024-07-14 09:33:21.369542] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:36.954 09:33:21 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:37.520 malloc0 00:23:37.520 09:33:21 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:37.520 09:33:21 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.n1promfwBG 00:23:37.777 [2024-07-14 09:33:22.215945] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:37.777 [2024-07-14 09:33:22.215996] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:23:37.777 [2024-07-14 09:33:22.216047] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:37.777 request: 00:23:37.777 { 00:23:37.777 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:37.777 "host": "nqn.2016-06.io.spdk:host1", 00:23:37.777 "psk": "/tmp/tmp.n1promfwBG", 00:23:37.777 "method": "nvmf_subsystem_add_host", 00:23:37.777 "req_id": 1 00:23:37.777 } 00:23:37.777 Got JSON-RPC error response 00:23:37.777 response: 00:23:37.777 { 00:23:37.777 "code": -32603, 00:23:37.777 "message": "Internal error" 00:23:37.777 } 00:23:38.034 09:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:38.034 09:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:38.034 09:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:38.034 09:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:38.034 09:33:22 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 781853 00:23:38.034 09:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 781853 ']' 00:23:38.034 09:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 781853 00:23:38.034 09:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:38.034 09:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:38.034 09:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 781853 00:23:38.034 09:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:38.034 09:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:38.034 09:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 781853' 00:23:38.034 killing process with pid 781853 00:23:38.034 09:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 781853 00:23:38.034 09:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 781853 00:23:38.292 09:33:22 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.n1promfwBG 00:23:38.292 09:33:22 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:23:38.292 09:33:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:38.292 09:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:38.292 09:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:38.292 09:33:22 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@481 -- # nvmfpid=782149 00:23:38.292 09:33:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:38.292 09:33:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 782149 00:23:38.292 09:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 782149 ']' 00:23:38.292 09:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:38.292 09:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:38.292 09:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:38.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:38.292 09:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:38.292 09:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:38.292 [2024-07-14 09:33:22.566111] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:23:38.292 [2024-07-14 09:33:22.566202] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:38.292 EAL: No free 2048 kB hugepages reported on node 1 00:23:38.292 [2024-07-14 09:33:22.633540] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.292 [2024-07-14 09:33:22.722080] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:38.292 [2024-07-14 09:33:22.722145] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:38.292 [2024-07-14 09:33:22.722162] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:38.292 [2024-07-14 09:33:22.722177] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:38.292 [2024-07-14 09:33:22.722190] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
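Permissions are restored to 0600 (tls.sh@181 above) and the whole setup is repeated once more, this time to capture configuration rather than just run I/O: tls.sh@196-197 below dumps the target and the bdevperf initiator as JSON with save_config, and tls.sh@203 later feeds a captured config back in through -c /dev/fd/62 so the same TLS state can be recreated without re-issuing RPCs. The capture itself:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  tgtconf=$($rpc save_config)                                  # target (default socket)
  bdevperfconf=$($rpc -s /var/tmp/bdevperf.sock save_config)   # bdevperf initiator
  # The target dump keeps the PSK path under nvmf_subsystem_add_host, the
  # bdevperf dump under bdev_nvme_attach_controller -- the two places the key
  # was supplied during setup.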
00:23:38.292 [2024-07-14 09:33:22.722222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:38.549 09:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:38.549 09:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:38.549 09:33:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:38.549 09:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:38.549 09:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:38.549 09:33:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:38.549 09:33:22 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.n1promfwBG 00:23:38.549 09:33:22 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.n1promfwBG 00:23:38.549 09:33:22 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:38.807 [2024-07-14 09:33:23.085467] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:38.807 09:33:23 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:39.062 09:33:23 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:39.318 [2024-07-14 09:33:23.663032] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:39.318 [2024-07-14 09:33:23.663270] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:39.318 09:33:23 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:39.574 malloc0 00:23:39.574 09:33:23 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:39.831 09:33:24 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.n1promfwBG 00:23:40.089 [2024-07-14 09:33:24.404653] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:40.089 09:33:24 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=782435 00:23:40.089 09:33:24 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:40.089 09:33:24 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:40.089 09:33:24 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 782435 /var/tmp/bdevperf.sock 00:23:40.089 09:33:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 782435 ']' 00:23:40.089 09:33:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:40.089 09:33:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:40.089 09:33:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:40.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:40.089 09:33:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:40.089 09:33:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:40.089 [2024-07-14 09:33:24.466281] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:23:40.089 [2024-07-14 09:33:24.466351] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid782435 ] 00:23:40.089 EAL: No free 2048 kB hugepages reported on node 1 00:23:40.089 [2024-07-14 09:33:24.522878] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.347 [2024-07-14 09:33:24.607244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:40.347 09:33:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:40.347 09:33:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:40.347 09:33:24 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.n1promfwBG 00:23:40.604 [2024-07-14 09:33:24.930505] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:40.604 [2024-07-14 09:33:24.930637] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:40.604 TLSTESTn1 00:23:40.604 09:33:25 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:41.168 09:33:25 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:23:41.168 "subsystems": [ 00:23:41.168 { 00:23:41.168 "subsystem": "keyring", 00:23:41.168 "config": [] 00:23:41.168 }, 00:23:41.168 { 00:23:41.168 "subsystem": "iobuf", 00:23:41.168 "config": [ 00:23:41.168 { 00:23:41.168 "method": "iobuf_set_options", 00:23:41.168 "params": { 00:23:41.168 "small_pool_count": 8192, 00:23:41.168 "large_pool_count": 1024, 00:23:41.168 "small_bufsize": 8192, 00:23:41.168 "large_bufsize": 135168 00:23:41.168 } 00:23:41.168 } 00:23:41.168 ] 00:23:41.168 }, 00:23:41.168 { 00:23:41.168 "subsystem": "sock", 00:23:41.168 "config": [ 00:23:41.168 { 00:23:41.168 "method": "sock_set_default_impl", 00:23:41.168 "params": { 00:23:41.168 "impl_name": "posix" 00:23:41.168 } 00:23:41.168 }, 00:23:41.168 { 00:23:41.168 "method": "sock_impl_set_options", 00:23:41.168 "params": { 00:23:41.168 "impl_name": "ssl", 00:23:41.168 "recv_buf_size": 4096, 00:23:41.168 "send_buf_size": 4096, 00:23:41.168 "enable_recv_pipe": true, 00:23:41.168 "enable_quickack": false, 00:23:41.168 "enable_placement_id": 0, 00:23:41.168 "enable_zerocopy_send_server": true, 00:23:41.169 "enable_zerocopy_send_client": false, 00:23:41.169 "zerocopy_threshold": 0, 00:23:41.169 "tls_version": 0, 00:23:41.169 "enable_ktls": false 00:23:41.169 } 00:23:41.169 }, 00:23:41.169 { 00:23:41.169 "method": "sock_impl_set_options", 00:23:41.169 "params": { 00:23:41.169 "impl_name": "posix", 00:23:41.169 "recv_buf_size": 2097152, 00:23:41.169 
"send_buf_size": 2097152, 00:23:41.169 "enable_recv_pipe": true, 00:23:41.169 "enable_quickack": false, 00:23:41.169 "enable_placement_id": 0, 00:23:41.169 "enable_zerocopy_send_server": true, 00:23:41.169 "enable_zerocopy_send_client": false, 00:23:41.169 "zerocopy_threshold": 0, 00:23:41.169 "tls_version": 0, 00:23:41.169 "enable_ktls": false 00:23:41.169 } 00:23:41.169 } 00:23:41.169 ] 00:23:41.169 }, 00:23:41.169 { 00:23:41.169 "subsystem": "vmd", 00:23:41.169 "config": [] 00:23:41.169 }, 00:23:41.169 { 00:23:41.169 "subsystem": "accel", 00:23:41.169 "config": [ 00:23:41.169 { 00:23:41.169 "method": "accel_set_options", 00:23:41.169 "params": { 00:23:41.169 "small_cache_size": 128, 00:23:41.169 "large_cache_size": 16, 00:23:41.169 "task_count": 2048, 00:23:41.169 "sequence_count": 2048, 00:23:41.169 "buf_count": 2048 00:23:41.169 } 00:23:41.169 } 00:23:41.169 ] 00:23:41.169 }, 00:23:41.169 { 00:23:41.169 "subsystem": "bdev", 00:23:41.169 "config": [ 00:23:41.169 { 00:23:41.169 "method": "bdev_set_options", 00:23:41.169 "params": { 00:23:41.169 "bdev_io_pool_size": 65535, 00:23:41.169 "bdev_io_cache_size": 256, 00:23:41.169 "bdev_auto_examine": true, 00:23:41.169 "iobuf_small_cache_size": 128, 00:23:41.169 "iobuf_large_cache_size": 16 00:23:41.169 } 00:23:41.169 }, 00:23:41.169 { 00:23:41.169 "method": "bdev_raid_set_options", 00:23:41.169 "params": { 00:23:41.169 "process_window_size_kb": 1024 00:23:41.169 } 00:23:41.169 }, 00:23:41.169 { 00:23:41.169 "method": "bdev_iscsi_set_options", 00:23:41.169 "params": { 00:23:41.169 "timeout_sec": 30 00:23:41.169 } 00:23:41.169 }, 00:23:41.169 { 00:23:41.169 "method": "bdev_nvme_set_options", 00:23:41.169 "params": { 00:23:41.169 "action_on_timeout": "none", 00:23:41.169 "timeout_us": 0, 00:23:41.169 "timeout_admin_us": 0, 00:23:41.169 "keep_alive_timeout_ms": 10000, 00:23:41.169 "arbitration_burst": 0, 00:23:41.169 "low_priority_weight": 0, 00:23:41.169 "medium_priority_weight": 0, 00:23:41.169 "high_priority_weight": 0, 00:23:41.169 "nvme_adminq_poll_period_us": 10000, 00:23:41.169 "nvme_ioq_poll_period_us": 0, 00:23:41.169 "io_queue_requests": 0, 00:23:41.169 "delay_cmd_submit": true, 00:23:41.169 "transport_retry_count": 4, 00:23:41.169 "bdev_retry_count": 3, 00:23:41.169 "transport_ack_timeout": 0, 00:23:41.169 "ctrlr_loss_timeout_sec": 0, 00:23:41.169 "reconnect_delay_sec": 0, 00:23:41.169 "fast_io_fail_timeout_sec": 0, 00:23:41.169 "disable_auto_failback": false, 00:23:41.169 "generate_uuids": false, 00:23:41.169 "transport_tos": 0, 00:23:41.169 "nvme_error_stat": false, 00:23:41.169 "rdma_srq_size": 0, 00:23:41.169 "io_path_stat": false, 00:23:41.169 "allow_accel_sequence": false, 00:23:41.169 "rdma_max_cq_size": 0, 00:23:41.169 "rdma_cm_event_timeout_ms": 0, 00:23:41.169 "dhchap_digests": [ 00:23:41.169 "sha256", 00:23:41.169 "sha384", 00:23:41.169 "sha512" 00:23:41.169 ], 00:23:41.169 "dhchap_dhgroups": [ 00:23:41.169 "null", 00:23:41.169 "ffdhe2048", 00:23:41.169 "ffdhe3072", 00:23:41.169 "ffdhe4096", 00:23:41.169 "ffdhe6144", 00:23:41.169 "ffdhe8192" 00:23:41.169 ] 00:23:41.169 } 00:23:41.169 }, 00:23:41.169 { 00:23:41.169 "method": "bdev_nvme_set_hotplug", 00:23:41.169 "params": { 00:23:41.169 "period_us": 100000, 00:23:41.169 "enable": false 00:23:41.169 } 00:23:41.169 }, 00:23:41.169 { 00:23:41.169 "method": "bdev_malloc_create", 00:23:41.169 "params": { 00:23:41.169 "name": "malloc0", 00:23:41.169 "num_blocks": 8192, 00:23:41.169 "block_size": 4096, 00:23:41.169 "physical_block_size": 4096, 00:23:41.169 "uuid": 
"fd66a387-9489-4368-9e1b-cd885109bf4b", 00:23:41.169 "optimal_io_boundary": 0 00:23:41.169 } 00:23:41.169 }, 00:23:41.169 { 00:23:41.169 "method": "bdev_wait_for_examine" 00:23:41.169 } 00:23:41.169 ] 00:23:41.169 }, 00:23:41.169 { 00:23:41.169 "subsystem": "nbd", 00:23:41.169 "config": [] 00:23:41.169 }, 00:23:41.169 { 00:23:41.169 "subsystem": "scheduler", 00:23:41.169 "config": [ 00:23:41.169 { 00:23:41.169 "method": "framework_set_scheduler", 00:23:41.169 "params": { 00:23:41.169 "name": "static" 00:23:41.169 } 00:23:41.169 } 00:23:41.169 ] 00:23:41.169 }, 00:23:41.169 { 00:23:41.169 "subsystem": "nvmf", 00:23:41.169 "config": [ 00:23:41.169 { 00:23:41.169 "method": "nvmf_set_config", 00:23:41.169 "params": { 00:23:41.169 "discovery_filter": "match_any", 00:23:41.169 "admin_cmd_passthru": { 00:23:41.169 "identify_ctrlr": false 00:23:41.169 } 00:23:41.169 } 00:23:41.169 }, 00:23:41.169 { 00:23:41.169 "method": "nvmf_set_max_subsystems", 00:23:41.169 "params": { 00:23:41.169 "max_subsystems": 1024 00:23:41.169 } 00:23:41.169 }, 00:23:41.169 { 00:23:41.169 "method": "nvmf_set_crdt", 00:23:41.169 "params": { 00:23:41.169 "crdt1": 0, 00:23:41.169 "crdt2": 0, 00:23:41.169 "crdt3": 0 00:23:41.169 } 00:23:41.169 }, 00:23:41.169 { 00:23:41.169 "method": "nvmf_create_transport", 00:23:41.169 "params": { 00:23:41.169 "trtype": "TCP", 00:23:41.169 "max_queue_depth": 128, 00:23:41.169 "max_io_qpairs_per_ctrlr": 127, 00:23:41.169 "in_capsule_data_size": 4096, 00:23:41.169 "max_io_size": 131072, 00:23:41.169 "io_unit_size": 131072, 00:23:41.169 "max_aq_depth": 128, 00:23:41.169 "num_shared_buffers": 511, 00:23:41.169 "buf_cache_size": 4294967295, 00:23:41.169 "dif_insert_or_strip": false, 00:23:41.169 "zcopy": false, 00:23:41.169 "c2h_success": false, 00:23:41.169 "sock_priority": 0, 00:23:41.169 "abort_timeout_sec": 1, 00:23:41.169 "ack_timeout": 0, 00:23:41.169 "data_wr_pool_size": 0 00:23:41.169 } 00:23:41.169 }, 00:23:41.169 { 00:23:41.169 "method": "nvmf_create_subsystem", 00:23:41.169 "params": { 00:23:41.169 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:41.169 "allow_any_host": false, 00:23:41.169 "serial_number": "SPDK00000000000001", 00:23:41.169 "model_number": "SPDK bdev Controller", 00:23:41.169 "max_namespaces": 10, 00:23:41.169 "min_cntlid": 1, 00:23:41.169 "max_cntlid": 65519, 00:23:41.169 "ana_reporting": false 00:23:41.169 } 00:23:41.169 }, 00:23:41.169 { 00:23:41.169 "method": "nvmf_subsystem_add_host", 00:23:41.169 "params": { 00:23:41.169 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:41.169 "host": "nqn.2016-06.io.spdk:host1", 00:23:41.169 "psk": "/tmp/tmp.n1promfwBG" 00:23:41.169 } 00:23:41.169 }, 00:23:41.169 { 00:23:41.169 "method": "nvmf_subsystem_add_ns", 00:23:41.169 "params": { 00:23:41.169 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:41.169 "namespace": { 00:23:41.169 "nsid": 1, 00:23:41.169 "bdev_name": "malloc0", 00:23:41.169 "nguid": "FD66A387948943689E1BCD885109BF4B", 00:23:41.169 "uuid": "fd66a387-9489-4368-9e1b-cd885109bf4b", 00:23:41.169 "no_auto_visible": false 00:23:41.169 } 00:23:41.169 } 00:23:41.169 }, 00:23:41.169 { 00:23:41.169 "method": "nvmf_subsystem_add_listener", 00:23:41.169 "params": { 00:23:41.169 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:41.169 "listen_address": { 00:23:41.169 "trtype": "TCP", 00:23:41.169 "adrfam": "IPv4", 00:23:41.169 "traddr": "10.0.0.2", 00:23:41.169 "trsvcid": "4420" 00:23:41.169 }, 00:23:41.169 "secure_channel": true 00:23:41.169 } 00:23:41.169 } 00:23:41.169 ] 00:23:41.169 } 00:23:41.169 ] 00:23:41.169 }' 00:23:41.169 09:33:25 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:41.428 09:33:25 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:23:41.428 "subsystems": [ 00:23:41.428 { 00:23:41.428 "subsystem": "keyring", 00:23:41.428 "config": [] 00:23:41.428 }, 00:23:41.428 { 00:23:41.428 "subsystem": "iobuf", 00:23:41.428 "config": [ 00:23:41.428 { 00:23:41.428 "method": "iobuf_set_options", 00:23:41.428 "params": { 00:23:41.428 "small_pool_count": 8192, 00:23:41.428 "large_pool_count": 1024, 00:23:41.428 "small_bufsize": 8192, 00:23:41.428 "large_bufsize": 135168 00:23:41.428 } 00:23:41.428 } 00:23:41.428 ] 00:23:41.428 }, 00:23:41.428 { 00:23:41.428 "subsystem": "sock", 00:23:41.428 "config": [ 00:23:41.428 { 00:23:41.428 "method": "sock_set_default_impl", 00:23:41.428 "params": { 00:23:41.428 "impl_name": "posix" 00:23:41.428 } 00:23:41.428 }, 00:23:41.428 { 00:23:41.428 "method": "sock_impl_set_options", 00:23:41.428 "params": { 00:23:41.428 "impl_name": "ssl", 00:23:41.428 "recv_buf_size": 4096, 00:23:41.428 "send_buf_size": 4096, 00:23:41.428 "enable_recv_pipe": true, 00:23:41.428 "enable_quickack": false, 00:23:41.428 "enable_placement_id": 0, 00:23:41.428 "enable_zerocopy_send_server": true, 00:23:41.428 "enable_zerocopy_send_client": false, 00:23:41.428 "zerocopy_threshold": 0, 00:23:41.428 "tls_version": 0, 00:23:41.428 "enable_ktls": false 00:23:41.428 } 00:23:41.428 }, 00:23:41.428 { 00:23:41.428 "method": "sock_impl_set_options", 00:23:41.428 "params": { 00:23:41.428 "impl_name": "posix", 00:23:41.428 "recv_buf_size": 2097152, 00:23:41.428 "send_buf_size": 2097152, 00:23:41.428 "enable_recv_pipe": true, 00:23:41.428 "enable_quickack": false, 00:23:41.428 "enable_placement_id": 0, 00:23:41.428 "enable_zerocopy_send_server": true, 00:23:41.428 "enable_zerocopy_send_client": false, 00:23:41.428 "zerocopy_threshold": 0, 00:23:41.428 "tls_version": 0, 00:23:41.428 "enable_ktls": false 00:23:41.428 } 00:23:41.428 } 00:23:41.428 ] 00:23:41.428 }, 00:23:41.428 { 00:23:41.428 "subsystem": "vmd", 00:23:41.428 "config": [] 00:23:41.428 }, 00:23:41.428 { 00:23:41.428 "subsystem": "accel", 00:23:41.428 "config": [ 00:23:41.428 { 00:23:41.428 "method": "accel_set_options", 00:23:41.428 "params": { 00:23:41.428 "small_cache_size": 128, 00:23:41.428 "large_cache_size": 16, 00:23:41.428 "task_count": 2048, 00:23:41.428 "sequence_count": 2048, 00:23:41.428 "buf_count": 2048 00:23:41.428 } 00:23:41.428 } 00:23:41.428 ] 00:23:41.428 }, 00:23:41.428 { 00:23:41.428 "subsystem": "bdev", 00:23:41.428 "config": [ 00:23:41.428 { 00:23:41.428 "method": "bdev_set_options", 00:23:41.428 "params": { 00:23:41.428 "bdev_io_pool_size": 65535, 00:23:41.428 "bdev_io_cache_size": 256, 00:23:41.428 "bdev_auto_examine": true, 00:23:41.428 "iobuf_small_cache_size": 128, 00:23:41.428 "iobuf_large_cache_size": 16 00:23:41.428 } 00:23:41.428 }, 00:23:41.428 { 00:23:41.428 "method": "bdev_raid_set_options", 00:23:41.428 "params": { 00:23:41.428 "process_window_size_kb": 1024 00:23:41.428 } 00:23:41.428 }, 00:23:41.428 { 00:23:41.428 "method": "bdev_iscsi_set_options", 00:23:41.428 "params": { 00:23:41.428 "timeout_sec": 30 00:23:41.428 } 00:23:41.428 }, 00:23:41.428 { 00:23:41.428 "method": "bdev_nvme_set_options", 00:23:41.428 "params": { 00:23:41.428 "action_on_timeout": "none", 00:23:41.428 "timeout_us": 0, 00:23:41.428 "timeout_admin_us": 0, 00:23:41.428 "keep_alive_timeout_ms": 10000, 00:23:41.428 "arbitration_burst": 0, 
00:23:41.428 "low_priority_weight": 0, 00:23:41.428 "medium_priority_weight": 0, 00:23:41.428 "high_priority_weight": 0, 00:23:41.428 "nvme_adminq_poll_period_us": 10000, 00:23:41.428 "nvme_ioq_poll_period_us": 0, 00:23:41.428 "io_queue_requests": 512, 00:23:41.428 "delay_cmd_submit": true, 00:23:41.428 "transport_retry_count": 4, 00:23:41.428 "bdev_retry_count": 3, 00:23:41.428 "transport_ack_timeout": 0, 00:23:41.428 "ctrlr_loss_timeout_sec": 0, 00:23:41.428 "reconnect_delay_sec": 0, 00:23:41.428 "fast_io_fail_timeout_sec": 0, 00:23:41.428 "disable_auto_failback": false, 00:23:41.428 "generate_uuids": false, 00:23:41.428 "transport_tos": 0, 00:23:41.428 "nvme_error_stat": false, 00:23:41.428 "rdma_srq_size": 0, 00:23:41.428 "io_path_stat": false, 00:23:41.428 "allow_accel_sequence": false, 00:23:41.428 "rdma_max_cq_size": 0, 00:23:41.428 "rdma_cm_event_timeout_ms": 0, 00:23:41.428 "dhchap_digests": [ 00:23:41.428 "sha256", 00:23:41.429 "sha384", 00:23:41.429 "sha512" 00:23:41.429 ], 00:23:41.429 "dhchap_dhgroups": [ 00:23:41.429 "null", 00:23:41.429 "ffdhe2048", 00:23:41.429 "ffdhe3072", 00:23:41.429 "ffdhe4096", 00:23:41.429 "ffdhe6144", 00:23:41.429 "ffdhe8192" 00:23:41.429 ] 00:23:41.429 } 00:23:41.429 }, 00:23:41.429 { 00:23:41.429 "method": "bdev_nvme_attach_controller", 00:23:41.429 "params": { 00:23:41.429 "name": "TLSTEST", 00:23:41.429 "trtype": "TCP", 00:23:41.429 "adrfam": "IPv4", 00:23:41.429 "traddr": "10.0.0.2", 00:23:41.429 "trsvcid": "4420", 00:23:41.429 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:41.429 "prchk_reftag": false, 00:23:41.429 "prchk_guard": false, 00:23:41.429 "ctrlr_loss_timeout_sec": 0, 00:23:41.429 "reconnect_delay_sec": 0, 00:23:41.429 "fast_io_fail_timeout_sec": 0, 00:23:41.429 "psk": "/tmp/tmp.n1promfwBG", 00:23:41.429 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:41.429 "hdgst": false, 00:23:41.429 "ddgst": false 00:23:41.429 } 00:23:41.429 }, 00:23:41.429 { 00:23:41.429 "method": "bdev_nvme_set_hotplug", 00:23:41.429 "params": { 00:23:41.429 "period_us": 100000, 00:23:41.429 "enable": false 00:23:41.429 } 00:23:41.429 }, 00:23:41.429 { 00:23:41.429 "method": "bdev_wait_for_examine" 00:23:41.429 } 00:23:41.429 ] 00:23:41.429 }, 00:23:41.429 { 00:23:41.429 "subsystem": "nbd", 00:23:41.429 "config": [] 00:23:41.429 } 00:23:41.429 ] 00:23:41.429 }' 00:23:41.429 09:33:25 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 782435 00:23:41.429 09:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 782435 ']' 00:23:41.429 09:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 782435 00:23:41.429 09:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:41.429 09:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:41.429 09:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 782435 00:23:41.429 09:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:41.429 09:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:41.429 09:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 782435' 00:23:41.429 killing process with pid 782435 00:23:41.429 09:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 782435 00:23:41.429 Received shutdown signal, test time was about 10.000000 seconds 00:23:41.429 00:23:41.429 Latency(us) 00:23:41.429 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:23:41.429 =================================================================================================================== 00:23:41.429 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:41.429 [2024-07-14 09:33:25.695319] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:41.429 09:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 782435 00:23:41.687 09:33:25 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 782149 00:23:41.687 09:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 782149 ']' 00:23:41.687 09:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 782149 00:23:41.687 09:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:41.687 09:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:41.687 09:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 782149 00:23:41.687 09:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:41.687 09:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:41.687 09:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 782149' 00:23:41.687 killing process with pid 782149 00:23:41.687 09:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 782149 00:23:41.687 [2024-07-14 09:33:25.919523] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:41.687 09:33:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 782149 00:23:41.946 09:33:26 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:41.946 09:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:41.946 09:33:26 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:23:41.946 "subsystems": [ 00:23:41.946 { 00:23:41.946 "subsystem": "keyring", 00:23:41.946 "config": [] 00:23:41.946 }, 00:23:41.946 { 00:23:41.946 "subsystem": "iobuf", 00:23:41.946 "config": [ 00:23:41.946 { 00:23:41.946 "method": "iobuf_set_options", 00:23:41.946 "params": { 00:23:41.946 "small_pool_count": 8192, 00:23:41.946 "large_pool_count": 1024, 00:23:41.946 "small_bufsize": 8192, 00:23:41.946 "large_bufsize": 135168 00:23:41.946 } 00:23:41.946 } 00:23:41.946 ] 00:23:41.946 }, 00:23:41.946 { 00:23:41.946 "subsystem": "sock", 00:23:41.946 "config": [ 00:23:41.946 { 00:23:41.946 "method": "sock_set_default_impl", 00:23:41.946 "params": { 00:23:41.946 "impl_name": "posix" 00:23:41.946 } 00:23:41.946 }, 00:23:41.946 { 00:23:41.946 "method": "sock_impl_set_options", 00:23:41.946 "params": { 00:23:41.946 "impl_name": "ssl", 00:23:41.946 "recv_buf_size": 4096, 00:23:41.946 "send_buf_size": 4096, 00:23:41.946 "enable_recv_pipe": true, 00:23:41.946 "enable_quickack": false, 00:23:41.946 "enable_placement_id": 0, 00:23:41.946 "enable_zerocopy_send_server": true, 00:23:41.946 "enable_zerocopy_send_client": false, 00:23:41.946 "zerocopy_threshold": 0, 00:23:41.946 "tls_version": 0, 00:23:41.946 "enable_ktls": false 00:23:41.946 } 00:23:41.946 }, 00:23:41.946 { 00:23:41.946 "method": "sock_impl_set_options", 00:23:41.946 "params": { 00:23:41.946 "impl_name": "posix", 00:23:41.946 "recv_buf_size": 2097152, 00:23:41.946 "send_buf_size": 2097152, 00:23:41.946 "enable_recv_pipe": true, 00:23:41.946 
"enable_quickack": false, 00:23:41.946 "enable_placement_id": 0, 00:23:41.946 "enable_zerocopy_send_server": true, 00:23:41.946 "enable_zerocopy_send_client": false, 00:23:41.946 "zerocopy_threshold": 0, 00:23:41.946 "tls_version": 0, 00:23:41.946 "enable_ktls": false 00:23:41.946 } 00:23:41.946 } 00:23:41.946 ] 00:23:41.946 }, 00:23:41.946 { 00:23:41.946 "subsystem": "vmd", 00:23:41.946 "config": [] 00:23:41.946 }, 00:23:41.946 { 00:23:41.946 "subsystem": "accel", 00:23:41.946 "config": [ 00:23:41.946 { 00:23:41.946 "method": "accel_set_options", 00:23:41.946 "params": { 00:23:41.946 "small_cache_size": 128, 00:23:41.946 "large_cache_size": 16, 00:23:41.946 "task_count": 2048, 00:23:41.946 "sequence_count": 2048, 00:23:41.946 "buf_count": 2048 00:23:41.946 } 00:23:41.946 } 00:23:41.946 ] 00:23:41.946 }, 00:23:41.946 { 00:23:41.946 "subsystem": "bdev", 00:23:41.946 "config": [ 00:23:41.946 { 00:23:41.946 "method": "bdev_set_options", 00:23:41.946 "params": { 00:23:41.946 "bdev_io_pool_size": 65535, 00:23:41.946 "bdev_io_cache_size": 256, 00:23:41.946 "bdev_auto_examine": true, 00:23:41.946 "iobuf_small_cache_size": 128, 00:23:41.946 "iobuf_large_cache_size": 16 00:23:41.946 } 00:23:41.946 }, 00:23:41.946 { 00:23:41.946 "method": "bdev_raid_set_options", 00:23:41.946 "params": { 00:23:41.946 "process_window_size_kb": 1024 00:23:41.946 } 00:23:41.946 }, 00:23:41.946 { 00:23:41.946 "method": "bdev_iscsi_set_options", 00:23:41.946 "params": { 00:23:41.946 "timeout_sec": 30 00:23:41.946 } 00:23:41.946 }, 00:23:41.946 { 00:23:41.946 "method": "bdev_nvme_set_options", 00:23:41.946 "params": { 00:23:41.946 "action_on_timeout": "none", 00:23:41.946 "timeout_us": 0, 00:23:41.946 "timeout_admin_us": 0, 00:23:41.946 "keep_alive_timeout_ms": 10000, 00:23:41.946 "arbitration_burst": 0, 00:23:41.946 "low_priority_weight": 0, 00:23:41.946 "medium_priority_weight": 0, 00:23:41.946 "high_priority_weight": 0, 00:23:41.946 "nvme_adminq_poll_period_us": 10000, 00:23:41.946 "nvme_ioq_poll_period_us": 0, 00:23:41.946 "io_queue_requests": 0, 00:23:41.946 "delay_cmd_submit": true, 00:23:41.946 "transport_retry_count": 4, 00:23:41.946 "bdev_retry_count": 3, 00:23:41.946 "transport_ack_timeout": 0, 00:23:41.946 "ctrlr_loss_timeout_sec": 0, 00:23:41.946 "reconnect_delay_sec": 0, 00:23:41.946 "fast_io_fail_timeout_sec": 0, 00:23:41.947 "disable_auto_failback": false, 00:23:41.947 "generate_uuids": false, 00:23:41.947 "transport_tos": 0, 00:23:41.947 "nvme_error_stat": false, 00:23:41.947 "rdma_srq_size": 0, 00:23:41.947 "io_path_stat": false, 00:23:41.947 "allow_accel_sequence": false, 00:23:41.947 "rdma_max_cq_size": 0, 00:23:41.947 "rdma_cm_event_timeout_ms": 0, 00:23:41.947 "dhchap_digests": [ 00:23:41.947 "sha256", 00:23:41.947 "sha384", 00:23:41.947 "sha512" 00:23:41.947 ], 00:23:41.947 "dhchap_dhgroups": [ 00:23:41.947 "null", 00:23:41.947 "ffdhe2048", 00:23:41.947 "ffdhe3072", 00:23:41.947 "ffdhe4096", 00:23:41.947 "ffdhe 09:33:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:41.947 6144", 00:23:41.947 "ffdhe8192" 00:23:41.947 ] 00:23:41.947 } 00:23:41.947 }, 00:23:41.947 { 00:23:41.947 "method": "bdev_nvme_set_hotplug", 00:23:41.947 "params": { 00:23:41.947 "period_us": 100000, 00:23:41.947 "enable": false 00:23:41.947 } 00:23:41.947 }, 00:23:41.947 { 00:23:41.947 "method": "bdev_malloc_create", 00:23:41.947 "params": { 00:23:41.947 "name": "malloc0", 00:23:41.947 "num_blocks": 8192, 00:23:41.947 "block_size": 4096, 00:23:41.947 "physical_block_size": 4096, 00:23:41.947 "uuid": 
"fd66a387-9489-4368-9e1b-cd885109bf4b", 00:23:41.947 "optimal_io_boundary": 0 00:23:41.947 } 00:23:41.947 }, 00:23:41.947 { 00:23:41.947 "method": "bdev_wait_for_examine" 00:23:41.947 } 00:23:41.947 ] 00:23:41.947 }, 00:23:41.947 { 00:23:41.947 "subsystem": "nbd", 00:23:41.947 "config": [] 00:23:41.947 }, 00:23:41.947 { 00:23:41.947 "subsystem": "scheduler", 00:23:41.947 "config": [ 00:23:41.947 { 00:23:41.947 "method": "framework_set_scheduler", 00:23:41.947 "params": { 00:23:41.947 "name": "static" 00:23:41.947 } 00:23:41.947 } 00:23:41.947 ] 00:23:41.947 }, 00:23:41.947 { 00:23:41.947 "subsystem": "nvmf", 00:23:41.947 "config": [ 00:23:41.947 { 00:23:41.947 "method": "nvmf_set_config", 00:23:41.947 "params": { 00:23:41.947 "discovery_filter": "match_any", 00:23:41.947 "admin_cmd_passthru": { 00:23:41.947 "identify_ctrlr": false 00:23:41.947 } 00:23:41.947 } 00:23:41.947 }, 00:23:41.947 { 00:23:41.947 "method": "nvmf_set_max_subsystems", 00:23:41.947 "params": { 00:23:41.947 "max_subsystems": 1024 00:23:41.947 } 00:23:41.947 }, 00:23:41.947 { 00:23:41.947 "method": "nvmf_set_crdt", 00:23:41.947 "params": { 00:23:41.947 "crdt1": 0, 00:23:41.947 "crdt2": 0, 00:23:41.947 "crdt3": 0 00:23:41.947 } 00:23:41.947 }, 00:23:41.947 { 00:23:41.947 "method": "nvmf_create_transport", 00:23:41.947 "params": { 00:23:41.947 "trtype": "TCP", 00:23:41.947 "max_queue_depth": 128, 00:23:41.947 "max_io_qpairs_per_ctrlr": 127, 00:23:41.947 "in_capsule_data_size": 4096, 00:23:41.947 "max_io_size": 131072, 00:23:41.947 "io_unit_size": 131072, 00:23:41.947 "max_aq_depth": 128, 00:23:41.947 "num_shared_buffers": 511, 00:23:41.947 "buf_cache_size": 4294967295, 00:23:41.947 "dif_insert_or_strip": false, 00:23:41.947 "zcopy": false, 00:23:41.947 "c2h_success": false, 00:23:41.947 "sock_priority": 0, 00:23:41.947 "abort_timeout_sec": 1, 00:23:41.947 "ack_timeout": 0, 00:23:41.947 "data_wr_pool_size": 0 00:23:41.947 } 00:23:41.947 }, 00:23:41.947 { 00:23:41.947 "method": "nvmf_create_subsystem", 00:23:41.947 "params": { 00:23:41.947 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:41.947 "allow_any_host": false, 00:23:41.947 "serial_number": "SPDK00000000000001", 00:23:41.947 "model_number": "SPDK bdev Controller", 00:23:41.947 "max_namespaces": 10, 00:23:41.947 "min_cntlid": 1, 00:23:41.947 "max_cntlid": 65519, 00:23:41.947 "ana_reporting": false 00:23:41.947 } 00:23:41.947 }, 00:23:41.947 { 00:23:41.947 "method": "nvmf_subsystem_add_host", 00:23:41.947 "params": { 00:23:41.947 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:41.947 "host": "nqn.2016-06.io.spdk:host1", 00:23:41.947 "psk": "/tmp/tmp.n1promfwBG" 00:23:41.947 } 00:23:41.947 }, 00:23:41.947 { 00:23:41.947 "method": "nvmf_subsystem_add_ns", 00:23:41.947 "params": { 00:23:41.947 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:41.947 "namespace": { 00:23:41.947 "nsid": 1, 00:23:41.947 "bdev_name": "malloc0", 00:23:41.947 "nguid": "FD66A387948943689E1BCD885109BF4B", 00:23:41.947 "uuid": "fd66a387-9489-4368-9e1b-cd885109bf4b", 00:23:41.947 "no_auto_visible": false 00:23:41.947 } 00:23:41.947 } 00:23:41.947 }, 00:23:41.947 { 00:23:41.947 "method": "nvmf_subsystem_add_listener", 00:23:41.947 "params": { 00:23:41.947 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:41.947 "listen_address": { 00:23:41.947 "trtype": "TCP", 00:23:41.947 "adrfam": "IPv4", 00:23:41.947 "traddr": "10.0.0.2", 00:23:41.947 "trsvcid": "4420" 00:23:41.947 }, 00:23:41.947 "secure_channel": true 00:23:41.947 } 00:23:41.947 } 00:23:41.947 ] 00:23:41.947 } 00:23:41.947 ] 00:23:41.947 }' 00:23:41.947 09:33:26 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:41.947 09:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=782597 00:23:41.947 09:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:41.947 09:33:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 782597 00:23:41.947 09:33:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 782597 ']' 00:23:41.947 09:33:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:41.947 09:33:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:41.947 09:33:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:41.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:41.947 09:33:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:41.947 09:33:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:41.947 [2024-07-14 09:33:26.205617] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:23:41.947 [2024-07-14 09:33:26.205697] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:41.947 EAL: No free 2048 kB hugepages reported on node 1 00:23:41.947 [2024-07-14 09:33:26.277559] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.947 [2024-07-14 09:33:26.373485] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:41.947 [2024-07-14 09:33:26.373543] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:41.947 [2024-07-14 09:33:26.373560] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:41.947 [2024-07-14 09:33:26.373573] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:41.947 [2024-07-14 09:33:26.373586] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
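waitforlisten then blocks until the new target's RPC socket (/var/tmp/spdk.sock by default) accepts requests before any rpc.py call is issued. The helper's implementation is not shown in this log; a purely hypothetical stand-in would poll a cheap RPC such as rpc_get_methods:

# Hypothetical stand-in for waitforlisten (not the actual autotest_common.sh helper):
# poll the RPC socket until the target answers, then proceed with configuration.
RPC_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
RPC_SOCK=/var/tmp/spdk.sock
for _ in $(seq 1 100); do
    if "$RPC_PY" -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1; then
        break
    fi
    sleep 0.1
done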
00:23:41.947 [2024-07-14 09:33:26.373668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:42.206 [2024-07-14 09:33:26.610740] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:42.206 [2024-07-14 09:33:26.626687] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:42.206 [2024-07-14 09:33:26.642742] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:42.206 [2024-07-14 09:33:26.651068] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:42.772 09:33:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:42.772 09:33:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:42.772 09:33:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:42.772 09:33:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:42.772 09:33:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:42.772 09:33:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:42.772 09:33:27 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=782738 00:23:42.772 09:33:27 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 782738 /var/tmp/bdevperf.sock 00:23:42.772 09:33:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 782738 ']' 00:23:42.772 09:33:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:42.772 09:33:27 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:42.772 09:33:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:42.772 09:33:27 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:23:42.772 "subsystems": [ 00:23:42.772 { 00:23:42.772 "subsystem": "keyring", 00:23:42.772 "config": [] 00:23:42.772 }, 00:23:42.772 { 00:23:42.772 "subsystem": "iobuf", 00:23:42.772 "config": [ 00:23:42.772 { 00:23:42.772 "method": "iobuf_set_options", 00:23:42.772 "params": { 00:23:42.772 "small_pool_count": 8192, 00:23:42.772 "large_pool_count": 1024, 00:23:42.772 "small_bufsize": 8192, 00:23:42.772 "large_bufsize": 135168 00:23:42.772 } 00:23:42.772 } 00:23:42.772 ] 00:23:42.772 }, 00:23:42.772 { 00:23:42.772 "subsystem": "sock", 00:23:42.772 "config": [ 00:23:42.772 { 00:23:42.772 "method": "sock_set_default_impl", 00:23:42.772 "params": { 00:23:42.772 "impl_name": "posix" 00:23:42.772 } 00:23:42.772 }, 00:23:42.772 { 00:23:42.772 "method": "sock_impl_set_options", 00:23:42.772 "params": { 00:23:42.772 "impl_name": "ssl", 00:23:42.772 "recv_buf_size": 4096, 00:23:42.772 "send_buf_size": 4096, 00:23:42.772 "enable_recv_pipe": true, 00:23:42.772 "enable_quickack": false, 00:23:42.772 "enable_placement_id": 0, 00:23:42.772 "enable_zerocopy_send_server": true, 00:23:42.772 "enable_zerocopy_send_client": false, 00:23:42.772 "zerocopy_threshold": 0, 00:23:42.772 "tls_version": 0, 00:23:42.772 "enable_ktls": false 00:23:42.772 } 00:23:42.772 }, 00:23:42.772 { 00:23:42.772 "method": "sock_impl_set_options", 00:23:42.772 "params": { 00:23:42.772 "impl_name": "posix", 00:23:42.772 "recv_buf_size": 2097152, 00:23:42.772 "send_buf_size": 2097152, 00:23:42.772 "enable_recv_pipe": true, 00:23:42.772 
"enable_quickack": false, 00:23:42.772 "enable_placement_id": 0, 00:23:42.772 "enable_zerocopy_send_server": true, 00:23:42.772 "enable_zerocopy_send_client": false, 00:23:42.772 "zerocopy_threshold": 0, 00:23:42.772 "tls_version": 0, 00:23:42.772 "enable_ktls": false 00:23:42.772 } 00:23:42.772 } 00:23:42.772 ] 00:23:42.772 }, 00:23:42.772 { 00:23:42.772 "subsystem": "vmd", 00:23:42.772 "config": [] 00:23:42.772 }, 00:23:42.772 { 00:23:42.772 "subsystem": "accel", 00:23:42.772 "config": [ 00:23:42.772 { 00:23:42.772 "method": "accel_set_options", 00:23:42.772 "params": { 00:23:42.772 "small_cache_size": 128, 00:23:42.772 "large_cache_size": 16, 00:23:42.772 "task_count": 2048, 00:23:42.772 "sequence_count": 2048, 00:23:42.772 "buf_count": 2048 00:23:42.772 } 00:23:42.772 } 00:23:42.772 ] 00:23:42.772 }, 00:23:42.772 { 00:23:42.772 "subsystem": "bdev", 00:23:42.772 "config": [ 00:23:42.772 { 00:23:42.772 "method": "bdev_set_options", 00:23:42.772 "params": { 00:23:42.772 "bdev_io_pool_size": 65535, 00:23:42.772 "bdev_io_cache_size": 256, 00:23:42.772 "bdev_auto_examine": true, 00:23:42.772 "iobuf_small_cache_size": 128, 00:23:42.772 "iobuf_large_cache_size": 16 00:23:42.772 } 00:23:42.772 }, 00:23:42.772 { 00:23:42.772 "method": "bdev_raid_set_options", 00:23:42.772 "params": { 00:23:42.772 "process_window_size_kb": 1024 00:23:42.772 } 00:23:42.772 }, 00:23:42.772 { 00:23:42.772 "method": "bdev_iscsi_set_options", 00:23:42.772 "params": { 00:23:42.772 "timeout_sec": 30 00:23:42.772 } 00:23:42.772 }, 00:23:42.772 { 00:23:42.772 "method": "bdev_nvme_set_options", 00:23:42.772 "params": { 00:23:42.772 "action_on_timeout": "none", 00:23:42.772 "timeout_us": 0, 00:23:42.772 "timeout_admin_us": 0, 00:23:42.772 "keep_alive_timeout_ms": 10000, 00:23:42.772 "arbitration_burst": 0, 00:23:42.772 "low_priority_weight": 0, 00:23:42.772 "medium_priority_weight": 0, 00:23:42.772 "high_priority_weight": 0, 00:23:42.772 "nvme_adminq_poll_period_us": 10000, 00:23:42.772 "nvme_ioq_poll_period_us": 0, 00:23:42.772 "io_queue_requests": 512, 00:23:42.772 "delay_cmd_submit": true, 00:23:42.772 "transport_retry_count": 4, 00:23:42.772 "bdev_retry_count": 3, 00:23:42.772 "transport_ack_timeout": 0, 00:23:42.772 "ctrlr_loss_timeout_sec": 0, 00:23:42.772 "reconnect_delay_sec": 0, 00:23:42.772 "fast_io_fail_timeout_sec": 0, 00:23:42.772 "disable_auto_failback": false, 00:23:42.772 "generate_uuids": false, 00:23:42.772 "transport_tos": 0, 00:23:42.772 "nvme_error_stat": false, 00:23:42.772 "rdma_srq_size": 0, 00:23:42.772 "io_path_stat": false, 00:23:42.772 "allow_accel_sequence": false, 00:23:42.772 "rdma_max_cq_size": 0, 00:23:42.772 "rdma_cm_event_timeout_ms": 0, 00:23:42.772 "dhchap_digests": [ 00:23:42.772 "sha256", 00:23:42.772 "sha384", 00:23:42.772 "sha512" 00:23:42.772 ], 00:23:42.772 "dhchap_dhgroups": [ 00:23:42.772 "null", 00:23:42.772 "ffdhe2048", 00:23:42.772 "ffdhe3072", 00:23:42.772 "ffdhe4096", 00:23:42.772 "ffdhe6144", 00:23:42.772 "ffdhe8192" 00:23:42.772 ] 00:23:42.772 } 00:23:42.772 }, 00:23:42.772 { 00:23:42.772 "method": "bdev_nvme_attach_controller", 00:23:42.772 "params": { 00:23:42.772 "name": "TLSTEST", 00:23:42.772 "trtype": "TCP", 00:23:42.772 "adrfam": "IPv4", 00:23:42.772 "traddr": "10.0.0.2", 00:23:42.772 "trsvcid": "4420", 00:23:42.772 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:42.772 "prchk_reftag": false, 00:23:42.772 "prchk_guard": false, 00:23:42.772 "ctrlr_loss_timeout_sec": 0, 00:23:42.772 "reconnect_delay_sec": 0, 00:23:42.772 "fast_io_fail_timeout_sec": 0, 00:23:42.772 
"psk": "/tmp/tmp.n1promfwBG", 00:23:42.772 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:42.772 "hdgst": false, 00:23:42.772 "ddgst": false 00:23:42.772 } 00:23:42.772 }, 00:23:42.772 { 00:23:42.772 "method": "bdev_nvme_set_hotplug", 00:23:42.772 "params": { 00:23:42.772 "period_us": 100000, 00:23:42.772 "enable": false 00:23:42.772 } 00:23:42.772 }, 00:23:42.772 { 00:23:42.772 "method": "bdev_wait_for_examine" 00:23:42.772 } 00:23:42.772 ] 00:23:42.772 }, 00:23:42.772 { 00:23:42.772 "subsystem": "nbd", 00:23:42.772 "config": [] 00:23:42.772 } 00:23:42.772 ] 00:23:42.772 }' 00:23:42.772 09:33:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:42.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:42.772 09:33:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:42.772 09:33:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:42.772 [2024-07-14 09:33:27.211486] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:23:42.772 [2024-07-14 09:33:27.211563] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid782738 ] 00:23:43.031 EAL: No free 2048 kB hugepages reported on node 1 00:23:43.031 [2024-07-14 09:33:27.269791] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:43.031 [2024-07-14 09:33:27.358189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:43.289 [2024-07-14 09:33:27.525431] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:43.289 [2024-07-14 09:33:27.525567] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:43.854 09:33:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:43.854 09:33:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:43.854 09:33:28 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:44.112 Running I/O for 10 seconds... 
00:23:54.082 00:23:54.082 Latency(us) 00:23:54.082 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:54.082 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:54.082 Verification LBA range: start 0x0 length 0x2000 00:23:54.082 TLSTESTn1 : 10.07 1429.01 5.58 0.00 0.00 89306.83 7136.14 121168.78 00:23:54.082 =================================================================================================================== 00:23:54.082 Total : 1429.01 5.58 0.00 0.00 89306.83 7136.14 121168.78 00:23:54.082 0 00:23:54.082 09:33:38 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:54.082 09:33:38 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 782738 00:23:54.082 09:33:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 782738 ']' 00:23:54.082 09:33:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 782738 00:23:54.082 09:33:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:54.082 09:33:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:54.082 09:33:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 782738 00:23:54.082 09:33:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:54.082 09:33:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:54.082 09:33:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 782738' 00:23:54.082 killing process with pid 782738 00:23:54.082 09:33:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 782738 00:23:54.082 Received shutdown signal, test time was about 10.000000 seconds 00:23:54.082 00:23:54.082 Latency(us) 00:23:54.082 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:54.082 =================================================================================================================== 00:23:54.082 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:54.082 [2024-07-14 09:33:38.474245] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:54.082 09:33:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 782738 00:23:54.339 09:33:38 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 782597 00:23:54.339 09:33:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 782597 ']' 00:23:54.339 09:33:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 782597 00:23:54.339 09:33:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:54.339 09:33:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:54.339 09:33:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 782597 00:23:54.339 09:33:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:54.339 09:33:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:54.339 09:33:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 782597' 00:23:54.339 killing process with pid 782597 00:23:54.339 09:33:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 782597 00:23:54.339 [2024-07-14 09:33:38.731397] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 
times 00:23:54.339 09:33:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 782597 00:23:54.596 09:33:38 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:23:54.596 09:33:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:54.596 09:33:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:54.596 09:33:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.596 09:33:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=784195 00:23:54.596 09:33:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:54.596 09:33:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 784195 00:23:54.596 09:33:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 784195 ']' 00:23:54.596 09:33:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:54.596 09:33:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:54.596 09:33:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:54.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:54.596 09:33:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:54.596 09:33:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.596 [2024-07-14 09:33:39.015480] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:23:54.596 [2024-07-14 09:33:39.015557] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:54.596 EAL: No free 2048 kB hugepages reported on node 1 00:23:54.853 [2024-07-14 09:33:39.080327] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.853 [2024-07-14 09:33:39.165217] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:54.853 [2024-07-14 09:33:39.165273] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:54.853 [2024-07-14 09:33:39.165302] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:54.853 [2024-07-14 09:33:39.165314] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:54.853 [2024-07-14 09:33:39.165324] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
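The setup_nvmf_tgt sequence logged next (target/tls.sh@219 and its @51-@58 helpers) rebuilds the TLS-enabled target by hand over rpc.py: TCP transport, subsystem, a listener created with -k so TLS is required, a malloc bdev as namespace, and finally the host entry carrying the PSK path. A sketch of exactly those calls as they appear in this log (the PSK-path flavor is the one the log flags as deprecated in favor of keyring keys):

# Sketch: target-side TLS setup, mirroring the rpc.py calls in this log.
RPC_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
"$RPC_PY" nvmf_create_transport -t tcp -o
"$RPC_PY" nvmf_create_subsystem "$NQN" -s SPDK00000000000001 -m 10
"$RPC_PY" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
"$RPC_PY" bdev_malloc_create 32 4096 -b malloc0
"$RPC_PY" nvmf_subsystem_add_ns "$NQN" malloc0 -n 1
"$RPC_PY" nvmf_subsystem_add_host "$NQN" nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.n1promfwBG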
00:23:54.853 [2024-07-14 09:33:39.165351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:54.853 09:33:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:54.853 09:33:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:54.853 09:33:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:54.853 09:33:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:54.853 09:33:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.853 09:33:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:54.853 09:33:39 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.n1promfwBG 00:23:54.853 09:33:39 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.n1promfwBG 00:23:54.853 09:33:39 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:55.418 [2024-07-14 09:33:39.574936] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:55.418 09:33:39 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:55.675 09:33:39 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:55.932 [2024-07-14 09:33:40.152471] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:55.932 [2024-07-14 09:33:40.152706] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:55.932 09:33:40 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:56.189 malloc0 00:23:56.189 09:33:40 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:56.446 09:33:40 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.n1promfwBG 00:23:56.704 [2024-07-14 09:33:40.942263] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:56.704 09:33:40 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=784365 00:23:56.704 09:33:40 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:56.704 09:33:40 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:56.704 09:33:40 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 784365 /var/tmp/bdevperf.sock 00:23:56.704 09:33:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 784365 ']' 00:23:56.704 09:33:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:56.704 09:33:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:56.704 09:33:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:56.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:56.704 09:33:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:56.704 09:33:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:56.704 [2024-07-14 09:33:41.000186] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:23:56.704 [2024-07-14 09:33:41.000254] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid784365 ] 00:23:56.704 EAL: No free 2048 kB hugepages reported on node 1 00:23:56.704 [2024-07-14 09:33:41.063255] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.704 [2024-07-14 09:33:41.151572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:56.961 09:33:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:56.961 09:33:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:56.961 09:33:41 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.n1promfwBG 00:23:57.219 09:33:41 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:57.478 [2024-07-14 09:33:41.727042] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:57.478 nvme0n1 00:23:57.478 09:33:41 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:57.478 Running I/O for 1 seconds... 
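In this pass (target/tls.sh@227-232) the initiator no longer hands the PSK to bdev_nvme_attach_controller as a file path; it first registers the file as keyring entry key0 and then references the key by name, which is the replacement for the deprecated 'PSK path' / 'spdk_nvme_ctrlr_opts.psk' interfaces warned about earlier in this log. A sketch of the two RPCs, with the addresses and NQNs copied from the log:

# Sketch: keyring-based TLS PSK on the bdevperf side (tls.sh@227-228).
RPC_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
BPERF_SOCK=/var/tmp/bdevperf.sock
# Register the PSK file under the name key0.
"$RPC_PY" -s "$BPERF_SOCK" keyring_file_add_key key0 /tmp/tmp.n1promfwBG
# Attach the NVMe/TCP controller, referencing the key by name via --psk.
"$RPC_PY" -s "$BPERF_SOCK" bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1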
00:23:58.848 00:23:58.848 Latency(us) 00:23:58.848 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:58.848 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:58.848 Verification LBA range: start 0x0 length 0x2000 00:23:58.848 nvme0n1 : 1.07 1521.98 5.95 0.00 0.00 81815.31 8592.50 127382.57 00:23:58.848 =================================================================================================================== 00:23:58.848 Total : 1521.98 5.95 0.00 0.00 81815.31 8592.50 127382.57 00:23:58.848 0 00:23:58.848 09:33:43 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 784365 00:23:58.848 09:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 784365 ']' 00:23:58.848 09:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 784365 00:23:58.848 09:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:58.848 09:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:58.848 09:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 784365 00:23:58.848 09:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:58.848 09:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:58.848 09:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 784365' 00:23:58.848 killing process with pid 784365 00:23:58.848 09:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 784365 00:23:58.848 Received shutdown signal, test time was about 1.000000 seconds 00:23:58.848 00:23:58.848 Latency(us) 00:23:58.848 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:58.848 =================================================================================================================== 00:23:58.848 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:58.848 09:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 784365 00:23:58.848 09:33:43 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 784195 00:23:58.848 09:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 784195 ']' 00:23:58.848 09:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 784195 00:23:58.848 09:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:58.848 09:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:58.848 09:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 784195 00:23:59.106 09:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:59.106 09:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:59.106 09:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 784195' 00:23:59.106 killing process with pid 784195 00:23:59.106 09:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 784195 00:23:59.106 [2024-07-14 09:33:43.301842] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:59.106 09:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 784195 00:23:59.106 09:33:43 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:23:59.106 09:33:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:59.106 09:33:43 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:59.106 09:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:59.364 09:33:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=784755 00:23:59.364 09:33:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:59.364 09:33:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 784755 00:23:59.364 09:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 784755 ']' 00:23:59.364 09:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:59.364 09:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:59.364 09:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:59.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:59.364 09:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:59.364 09:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:59.364 [2024-07-14 09:33:43.607368] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:23:59.364 [2024-07-14 09:33:43.607455] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:59.364 EAL: No free 2048 kB hugepages reported on node 1 00:23:59.364 [2024-07-14 09:33:43.671736] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.364 [2024-07-14 09:33:43.758577] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:59.364 [2024-07-14 09:33:43.758659] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:59.364 [2024-07-14 09:33:43.758673] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:59.364 [2024-07-14 09:33:43.758685] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:59.364 [2024-07-14 09:33:43.758709] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:59.364 [2024-07-14 09:33:43.758739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:59.622 09:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:59.622 09:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:59.622 09:33:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:59.622 09:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:59.622 09:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:59.622 09:33:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:59.622 09:33:43 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:23:59.622 09:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.622 09:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:59.622 [2024-07-14 09:33:43.908958] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:59.622 malloc0 00:23:59.622 [2024-07-14 09:33:43.941626] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:59.622 [2024-07-14 09:33:43.941932] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:59.622 09:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.622 09:33:43 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=784792 00:23:59.622 09:33:43 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 784792 /var/tmp/bdevperf.sock 00:23:59.622 09:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 784792 ']' 00:23:59.622 09:33:43 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:59.622 09:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:59.622 09:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:59.622 09:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:59.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:59.622 09:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:59.622 09:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:59.622 [2024-07-14 09:33:44.013688] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:23:59.622 [2024-07-14 09:33:44.013767] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid784792 ] 00:23:59.622 EAL: No free 2048 kB hugepages reported on node 1 00:23:59.622 [2024-07-14 09:33:44.073840] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.879 [2024-07-14 09:33:44.159475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:59.879 09:33:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:59.879 09:33:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:59.879 09:33:44 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.n1promfwBG 00:24:00.136 09:33:44 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:00.394 [2024-07-14 09:33:44.815838] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:00.652 nvme0n1 00:24:00.652 09:33:44 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:00.652 Running I/O for 1 seconds... 00:24:02.023 00:24:02.023 Latency(us) 00:24:02.023 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:02.023 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:02.023 Verification LBA range: start 0x0 length 0x2000 00:24:02.023 nvme0n1 : 1.07 1462.05 5.71 0.00 0.00 85073.74 6359.42 125829.12 00:24:02.023 =================================================================================================================== 00:24:02.023 Total : 1462.05 5.71 0.00 0.00 85073.74 6359.42 125829.12 00:24:02.023 0 00:24:02.023 09:33:46 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:24:02.023 09:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.023 09:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:02.023 09:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.023 09:33:46 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:24:02.023 "subsystems": [ 00:24:02.023 { 00:24:02.023 "subsystem": "keyring", 00:24:02.023 "config": [ 00:24:02.023 { 00:24:02.023 "method": "keyring_file_add_key", 00:24:02.023 "params": { 00:24:02.023 "name": "key0", 00:24:02.023 "path": "/tmp/tmp.n1promfwBG" 00:24:02.023 } 00:24:02.023 } 00:24:02.023 ] 00:24:02.023 }, 00:24:02.023 { 00:24:02.023 "subsystem": "iobuf", 00:24:02.023 "config": [ 00:24:02.023 { 00:24:02.023 "method": "iobuf_set_options", 00:24:02.023 "params": { 00:24:02.023 "small_pool_count": 8192, 00:24:02.023 "large_pool_count": 1024, 00:24:02.023 "small_bufsize": 8192, 00:24:02.023 "large_bufsize": 135168 00:24:02.023 } 00:24:02.023 } 00:24:02.023 ] 00:24:02.023 }, 00:24:02.023 { 00:24:02.023 "subsystem": "sock", 00:24:02.023 "config": [ 00:24:02.023 { 00:24:02.023 "method": "sock_set_default_impl", 00:24:02.023 "params": { 00:24:02.023 "impl_name": "posix" 00:24:02.023 } 
00:24:02.023 }, 00:24:02.023 { 00:24:02.023 "method": "sock_impl_set_options", 00:24:02.023 "params": { 00:24:02.023 "impl_name": "ssl", 00:24:02.023 "recv_buf_size": 4096, 00:24:02.023 "send_buf_size": 4096, 00:24:02.023 "enable_recv_pipe": true, 00:24:02.023 "enable_quickack": false, 00:24:02.023 "enable_placement_id": 0, 00:24:02.023 "enable_zerocopy_send_server": true, 00:24:02.023 "enable_zerocopy_send_client": false, 00:24:02.023 "zerocopy_threshold": 0, 00:24:02.023 "tls_version": 0, 00:24:02.023 "enable_ktls": false 00:24:02.023 } 00:24:02.023 }, 00:24:02.023 { 00:24:02.023 "method": "sock_impl_set_options", 00:24:02.023 "params": { 00:24:02.023 "impl_name": "posix", 00:24:02.023 "recv_buf_size": 2097152, 00:24:02.023 "send_buf_size": 2097152, 00:24:02.023 "enable_recv_pipe": true, 00:24:02.023 "enable_quickack": false, 00:24:02.023 "enable_placement_id": 0, 00:24:02.023 "enable_zerocopy_send_server": true, 00:24:02.023 "enable_zerocopy_send_client": false, 00:24:02.023 "zerocopy_threshold": 0, 00:24:02.023 "tls_version": 0, 00:24:02.023 "enable_ktls": false 00:24:02.023 } 00:24:02.023 } 00:24:02.023 ] 00:24:02.023 }, 00:24:02.023 { 00:24:02.023 "subsystem": "vmd", 00:24:02.023 "config": [] 00:24:02.023 }, 00:24:02.023 { 00:24:02.023 "subsystem": "accel", 00:24:02.023 "config": [ 00:24:02.023 { 00:24:02.023 "method": "accel_set_options", 00:24:02.023 "params": { 00:24:02.023 "small_cache_size": 128, 00:24:02.023 "large_cache_size": 16, 00:24:02.023 "task_count": 2048, 00:24:02.023 "sequence_count": 2048, 00:24:02.023 "buf_count": 2048 00:24:02.023 } 00:24:02.023 } 00:24:02.023 ] 00:24:02.023 }, 00:24:02.023 { 00:24:02.023 "subsystem": "bdev", 00:24:02.023 "config": [ 00:24:02.023 { 00:24:02.023 "method": "bdev_set_options", 00:24:02.023 "params": { 00:24:02.023 "bdev_io_pool_size": 65535, 00:24:02.023 "bdev_io_cache_size": 256, 00:24:02.024 "bdev_auto_examine": true, 00:24:02.024 "iobuf_small_cache_size": 128, 00:24:02.024 "iobuf_large_cache_size": 16 00:24:02.024 } 00:24:02.024 }, 00:24:02.024 { 00:24:02.024 "method": "bdev_raid_set_options", 00:24:02.024 "params": { 00:24:02.024 "process_window_size_kb": 1024 00:24:02.024 } 00:24:02.024 }, 00:24:02.024 { 00:24:02.024 "method": "bdev_iscsi_set_options", 00:24:02.024 "params": { 00:24:02.024 "timeout_sec": 30 00:24:02.024 } 00:24:02.024 }, 00:24:02.024 { 00:24:02.024 "method": "bdev_nvme_set_options", 00:24:02.024 "params": { 00:24:02.024 "action_on_timeout": "none", 00:24:02.024 "timeout_us": 0, 00:24:02.024 "timeout_admin_us": 0, 00:24:02.024 "keep_alive_timeout_ms": 10000, 00:24:02.024 "arbitration_burst": 0, 00:24:02.024 "low_priority_weight": 0, 00:24:02.024 "medium_priority_weight": 0, 00:24:02.024 "high_priority_weight": 0, 00:24:02.024 "nvme_adminq_poll_period_us": 10000, 00:24:02.024 "nvme_ioq_poll_period_us": 0, 00:24:02.024 "io_queue_requests": 0, 00:24:02.024 "delay_cmd_submit": true, 00:24:02.024 "transport_retry_count": 4, 00:24:02.024 "bdev_retry_count": 3, 00:24:02.024 "transport_ack_timeout": 0, 00:24:02.024 "ctrlr_loss_timeout_sec": 0, 00:24:02.024 "reconnect_delay_sec": 0, 00:24:02.024 "fast_io_fail_timeout_sec": 0, 00:24:02.024 "disable_auto_failback": false, 00:24:02.024 "generate_uuids": false, 00:24:02.024 "transport_tos": 0, 00:24:02.024 "nvme_error_stat": false, 00:24:02.024 "rdma_srq_size": 0, 00:24:02.024 "io_path_stat": false, 00:24:02.024 "allow_accel_sequence": false, 00:24:02.024 "rdma_max_cq_size": 0, 00:24:02.024 "rdma_cm_event_timeout_ms": 0, 00:24:02.024 "dhchap_digests": [ 00:24:02.024 "sha256", 
00:24:02.024 "sha384", 00:24:02.024 "sha512" 00:24:02.024 ], 00:24:02.024 "dhchap_dhgroups": [ 00:24:02.024 "null", 00:24:02.024 "ffdhe2048", 00:24:02.024 "ffdhe3072", 00:24:02.024 "ffdhe4096", 00:24:02.024 "ffdhe6144", 00:24:02.024 "ffdhe8192" 00:24:02.024 ] 00:24:02.024 } 00:24:02.024 }, 00:24:02.024 { 00:24:02.024 "method": "bdev_nvme_set_hotplug", 00:24:02.024 "params": { 00:24:02.024 "period_us": 100000, 00:24:02.024 "enable": false 00:24:02.024 } 00:24:02.024 }, 00:24:02.024 { 00:24:02.024 "method": "bdev_malloc_create", 00:24:02.024 "params": { 00:24:02.024 "name": "malloc0", 00:24:02.024 "num_blocks": 8192, 00:24:02.024 "block_size": 4096, 00:24:02.024 "physical_block_size": 4096, 00:24:02.024 "uuid": "a5078f15-9318-4c61-b957-498daf0fe2bf", 00:24:02.024 "optimal_io_boundary": 0 00:24:02.024 } 00:24:02.024 }, 00:24:02.024 { 00:24:02.024 "method": "bdev_wait_for_examine" 00:24:02.024 } 00:24:02.024 ] 00:24:02.024 }, 00:24:02.024 { 00:24:02.024 "subsystem": "nbd", 00:24:02.024 "config": [] 00:24:02.024 }, 00:24:02.024 { 00:24:02.024 "subsystem": "scheduler", 00:24:02.024 "config": [ 00:24:02.024 { 00:24:02.024 "method": "framework_set_scheduler", 00:24:02.024 "params": { 00:24:02.024 "name": "static" 00:24:02.024 } 00:24:02.024 } 00:24:02.024 ] 00:24:02.024 }, 00:24:02.024 { 00:24:02.024 "subsystem": "nvmf", 00:24:02.024 "config": [ 00:24:02.024 { 00:24:02.024 "method": "nvmf_set_config", 00:24:02.024 "params": { 00:24:02.024 "discovery_filter": "match_any", 00:24:02.024 "admin_cmd_passthru": { 00:24:02.024 "identify_ctrlr": false 00:24:02.024 } 00:24:02.024 } 00:24:02.024 }, 00:24:02.024 { 00:24:02.024 "method": "nvmf_set_max_subsystems", 00:24:02.024 "params": { 00:24:02.024 "max_subsystems": 1024 00:24:02.024 } 00:24:02.024 }, 00:24:02.024 { 00:24:02.024 "method": "nvmf_set_crdt", 00:24:02.024 "params": { 00:24:02.024 "crdt1": 0, 00:24:02.024 "crdt2": 0, 00:24:02.024 "crdt3": 0 00:24:02.024 } 00:24:02.024 }, 00:24:02.024 { 00:24:02.024 "method": "nvmf_create_transport", 00:24:02.024 "params": { 00:24:02.024 "trtype": "TCP", 00:24:02.024 "max_queue_depth": 128, 00:24:02.024 "max_io_qpairs_per_ctrlr": 127, 00:24:02.024 "in_capsule_data_size": 4096, 00:24:02.024 "max_io_size": 131072, 00:24:02.024 "io_unit_size": 131072, 00:24:02.024 "max_aq_depth": 128, 00:24:02.024 "num_shared_buffers": 511, 00:24:02.024 "buf_cache_size": 4294967295, 00:24:02.024 "dif_insert_or_strip": false, 00:24:02.024 "zcopy": false, 00:24:02.024 "c2h_success": false, 00:24:02.024 "sock_priority": 0, 00:24:02.024 "abort_timeout_sec": 1, 00:24:02.024 "ack_timeout": 0, 00:24:02.024 "data_wr_pool_size": 0 00:24:02.024 } 00:24:02.024 }, 00:24:02.024 { 00:24:02.024 "method": "nvmf_create_subsystem", 00:24:02.024 "params": { 00:24:02.024 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:02.024 "allow_any_host": false, 00:24:02.024 "serial_number": "00000000000000000000", 00:24:02.024 "model_number": "SPDK bdev Controller", 00:24:02.024 "max_namespaces": 32, 00:24:02.024 "min_cntlid": 1, 00:24:02.024 "max_cntlid": 65519, 00:24:02.024 "ana_reporting": false 00:24:02.024 } 00:24:02.024 }, 00:24:02.024 { 00:24:02.024 "method": "nvmf_subsystem_add_host", 00:24:02.024 "params": { 00:24:02.024 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:02.024 "host": "nqn.2016-06.io.spdk:host1", 00:24:02.024 "psk": "key0" 00:24:02.024 } 00:24:02.024 }, 00:24:02.024 { 00:24:02.024 "method": "nvmf_subsystem_add_ns", 00:24:02.024 "params": { 00:24:02.024 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:02.024 "namespace": { 00:24:02.024 "nsid": 1, 
00:24:02.024 "bdev_name": "malloc0", 00:24:02.024 "nguid": "A5078F1593184C61B957498DAF0FE2BF", 00:24:02.024 "uuid": "a5078f15-9318-4c61-b957-498daf0fe2bf", 00:24:02.024 "no_auto_visible": false 00:24:02.024 } 00:24:02.024 } 00:24:02.024 }, 00:24:02.024 { 00:24:02.024 "method": "nvmf_subsystem_add_listener", 00:24:02.024 "params": { 00:24:02.024 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:02.024 "listen_address": { 00:24:02.024 "trtype": "TCP", 00:24:02.024 "adrfam": "IPv4", 00:24:02.024 "traddr": "10.0.0.2", 00:24:02.024 "trsvcid": "4420" 00:24:02.024 }, 00:24:02.024 "secure_channel": true 00:24:02.024 } 00:24:02.024 } 00:24:02.024 ] 00:24:02.024 } 00:24:02.024 ] 00:24:02.024 }' 00:24:02.024 09:33:46 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:02.282 09:33:46 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:24:02.282 "subsystems": [ 00:24:02.282 { 00:24:02.282 "subsystem": "keyring", 00:24:02.282 "config": [ 00:24:02.282 { 00:24:02.282 "method": "keyring_file_add_key", 00:24:02.282 "params": { 00:24:02.282 "name": "key0", 00:24:02.282 "path": "/tmp/tmp.n1promfwBG" 00:24:02.282 } 00:24:02.282 } 00:24:02.282 ] 00:24:02.282 }, 00:24:02.282 { 00:24:02.282 "subsystem": "iobuf", 00:24:02.282 "config": [ 00:24:02.282 { 00:24:02.282 "method": "iobuf_set_options", 00:24:02.282 "params": { 00:24:02.282 "small_pool_count": 8192, 00:24:02.282 "large_pool_count": 1024, 00:24:02.282 "small_bufsize": 8192, 00:24:02.282 "large_bufsize": 135168 00:24:02.282 } 00:24:02.282 } 00:24:02.282 ] 00:24:02.282 }, 00:24:02.282 { 00:24:02.282 "subsystem": "sock", 00:24:02.282 "config": [ 00:24:02.282 { 00:24:02.282 "method": "sock_set_default_impl", 00:24:02.282 "params": { 00:24:02.282 "impl_name": "posix" 00:24:02.282 } 00:24:02.282 }, 00:24:02.282 { 00:24:02.282 "method": "sock_impl_set_options", 00:24:02.282 "params": { 00:24:02.282 "impl_name": "ssl", 00:24:02.282 "recv_buf_size": 4096, 00:24:02.282 "send_buf_size": 4096, 00:24:02.282 "enable_recv_pipe": true, 00:24:02.282 "enable_quickack": false, 00:24:02.282 "enable_placement_id": 0, 00:24:02.282 "enable_zerocopy_send_server": true, 00:24:02.282 "enable_zerocopy_send_client": false, 00:24:02.282 "zerocopy_threshold": 0, 00:24:02.282 "tls_version": 0, 00:24:02.282 "enable_ktls": false 00:24:02.282 } 00:24:02.282 }, 00:24:02.282 { 00:24:02.282 "method": "sock_impl_set_options", 00:24:02.282 "params": { 00:24:02.282 "impl_name": "posix", 00:24:02.282 "recv_buf_size": 2097152, 00:24:02.282 "send_buf_size": 2097152, 00:24:02.282 "enable_recv_pipe": true, 00:24:02.282 "enable_quickack": false, 00:24:02.282 "enable_placement_id": 0, 00:24:02.282 "enable_zerocopy_send_server": true, 00:24:02.282 "enable_zerocopy_send_client": false, 00:24:02.282 "zerocopy_threshold": 0, 00:24:02.282 "tls_version": 0, 00:24:02.282 "enable_ktls": false 00:24:02.282 } 00:24:02.282 } 00:24:02.282 ] 00:24:02.282 }, 00:24:02.282 { 00:24:02.282 "subsystem": "vmd", 00:24:02.282 "config": [] 00:24:02.282 }, 00:24:02.282 { 00:24:02.282 "subsystem": "accel", 00:24:02.282 "config": [ 00:24:02.282 { 00:24:02.282 "method": "accel_set_options", 00:24:02.282 "params": { 00:24:02.282 "small_cache_size": 128, 00:24:02.282 "large_cache_size": 16, 00:24:02.282 "task_count": 2048, 00:24:02.282 "sequence_count": 2048, 00:24:02.282 "buf_count": 2048 00:24:02.282 } 00:24:02.282 } 00:24:02.282 ] 00:24:02.282 }, 00:24:02.282 { 00:24:02.282 "subsystem": "bdev", 00:24:02.282 "config": [ 
00:24:02.282 { 00:24:02.282 "method": "bdev_set_options", 00:24:02.282 "params": { 00:24:02.282 "bdev_io_pool_size": 65535, 00:24:02.282 "bdev_io_cache_size": 256, 00:24:02.282 "bdev_auto_examine": true, 00:24:02.282 "iobuf_small_cache_size": 128, 00:24:02.282 "iobuf_large_cache_size": 16 00:24:02.282 } 00:24:02.282 }, 00:24:02.282 { 00:24:02.282 "method": "bdev_raid_set_options", 00:24:02.282 "params": { 00:24:02.282 "process_window_size_kb": 1024 00:24:02.282 } 00:24:02.282 }, 00:24:02.282 { 00:24:02.283 "method": "bdev_iscsi_set_options", 00:24:02.283 "params": { 00:24:02.283 "timeout_sec": 30 00:24:02.283 } 00:24:02.283 }, 00:24:02.283 { 00:24:02.283 "method": "bdev_nvme_set_options", 00:24:02.283 "params": { 00:24:02.283 "action_on_timeout": "none", 00:24:02.283 "timeout_us": 0, 00:24:02.283 "timeout_admin_us": 0, 00:24:02.283 "keep_alive_timeout_ms": 10000, 00:24:02.283 "arbitration_burst": 0, 00:24:02.283 "low_priority_weight": 0, 00:24:02.283 "medium_priority_weight": 0, 00:24:02.283 "high_priority_weight": 0, 00:24:02.283 "nvme_adminq_poll_period_us": 10000, 00:24:02.283 "nvme_ioq_poll_period_us": 0, 00:24:02.283 "io_queue_requests": 512, 00:24:02.283 "delay_cmd_submit": true, 00:24:02.283 "transport_retry_count": 4, 00:24:02.283 "bdev_retry_count": 3, 00:24:02.283 "transport_ack_timeout": 0, 00:24:02.283 "ctrlr_loss_timeout_sec": 0, 00:24:02.283 "reconnect_delay_sec": 0, 00:24:02.283 "fast_io_fail_timeout_sec": 0, 00:24:02.283 "disable_auto_failback": false, 00:24:02.283 "generate_uuids": false, 00:24:02.283 "transport_tos": 0, 00:24:02.283 "nvme_error_stat": false, 00:24:02.283 "rdma_srq_size": 0, 00:24:02.283 "io_path_stat": false, 00:24:02.283 "allow_accel_sequence": false, 00:24:02.283 "rdma_max_cq_size": 0, 00:24:02.283 "rdma_cm_event_timeout_ms": 0, 00:24:02.283 "dhchap_digests": [ 00:24:02.283 "sha256", 00:24:02.283 "sha384", 00:24:02.283 "sha512" 00:24:02.283 ], 00:24:02.283 "dhchap_dhgroups": [ 00:24:02.283 "null", 00:24:02.283 "ffdhe2048", 00:24:02.283 "ffdhe3072", 00:24:02.283 "ffdhe4096", 00:24:02.283 "ffdhe6144", 00:24:02.283 "ffdhe8192" 00:24:02.283 ] 00:24:02.283 } 00:24:02.283 }, 00:24:02.283 { 00:24:02.283 "method": "bdev_nvme_attach_controller", 00:24:02.283 "params": { 00:24:02.283 "name": "nvme0", 00:24:02.283 "trtype": "TCP", 00:24:02.283 "adrfam": "IPv4", 00:24:02.283 "traddr": "10.0.0.2", 00:24:02.283 "trsvcid": "4420", 00:24:02.283 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:02.283 "prchk_reftag": false, 00:24:02.283 "prchk_guard": false, 00:24:02.283 "ctrlr_loss_timeout_sec": 0, 00:24:02.283 "reconnect_delay_sec": 0, 00:24:02.283 "fast_io_fail_timeout_sec": 0, 00:24:02.283 "psk": "key0", 00:24:02.283 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:02.283 "hdgst": false, 00:24:02.283 "ddgst": false 00:24:02.283 } 00:24:02.283 }, 00:24:02.283 { 00:24:02.283 "method": "bdev_nvme_set_hotplug", 00:24:02.283 "params": { 00:24:02.283 "period_us": 100000, 00:24:02.283 "enable": false 00:24:02.283 } 00:24:02.283 }, 00:24:02.283 { 00:24:02.283 "method": "bdev_enable_histogram", 00:24:02.283 "params": { 00:24:02.283 "name": "nvme0n1", 00:24:02.283 "enable": true 00:24:02.283 } 00:24:02.283 }, 00:24:02.283 { 00:24:02.283 "method": "bdev_wait_for_examine" 00:24:02.283 } 00:24:02.283 ] 00:24:02.283 }, 00:24:02.283 { 00:24:02.283 "subsystem": "nbd", 00:24:02.283 "config": [] 00:24:02.283 } 00:24:02.283 ] 00:24:02.283 }' 00:24:02.283 09:33:46 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 784792 00:24:02.283 09:33:46 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@948 -- # '[' -z 784792 ']' 00:24:02.283 09:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 784792 00:24:02.283 09:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:02.283 09:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:02.283 09:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 784792 00:24:02.283 09:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:02.283 09:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:02.283 09:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 784792' 00:24:02.283 killing process with pid 784792 00:24:02.283 09:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 784792 00:24:02.283 Received shutdown signal, test time was about 1.000000 seconds 00:24:02.283 00:24:02.283 Latency(us) 00:24:02.283 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:02.283 =================================================================================================================== 00:24:02.283 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:02.283 09:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 784792 00:24:02.541 09:33:46 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 784755 00:24:02.541 09:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 784755 ']' 00:24:02.541 09:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 784755 00:24:02.541 09:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:02.541 09:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:02.541 09:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 784755 00:24:02.541 09:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:02.541 09:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:02.541 09:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 784755' 00:24:02.541 killing process with pid 784755 00:24:02.541 09:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 784755 00:24:02.541 09:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 784755 00:24:02.799 09:33:47 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:24:02.799 09:33:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:02.800 09:33:47 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:24:02.800 "subsystems": [ 00:24:02.800 { 00:24:02.800 "subsystem": "keyring", 00:24:02.800 "config": [ 00:24:02.800 { 00:24:02.800 "method": "keyring_file_add_key", 00:24:02.800 "params": { 00:24:02.800 "name": "key0", 00:24:02.800 "path": "/tmp/tmp.n1promfwBG" 00:24:02.800 } 00:24:02.800 } 00:24:02.800 ] 00:24:02.800 }, 00:24:02.800 { 00:24:02.800 "subsystem": "iobuf", 00:24:02.800 "config": [ 00:24:02.800 { 00:24:02.800 "method": "iobuf_set_options", 00:24:02.800 "params": { 00:24:02.800 "small_pool_count": 8192, 00:24:02.800 "large_pool_count": 1024, 00:24:02.800 "small_bufsize": 8192, 00:24:02.800 "large_bufsize": 135168 00:24:02.800 } 00:24:02.800 } 00:24:02.800 ] 00:24:02.800 }, 00:24:02.800 { 00:24:02.800 "subsystem": "sock", 00:24:02.800 "config": [ 00:24:02.800 { 00:24:02.800 "method": 
"sock_set_default_impl", 00:24:02.800 "params": { 00:24:02.800 "impl_name": "posix" 00:24:02.800 } 00:24:02.800 }, 00:24:02.800 { 00:24:02.800 "method": "sock_impl_set_options", 00:24:02.800 "params": { 00:24:02.800 "impl_name": "ssl", 00:24:02.800 "recv_buf_size": 4096, 00:24:02.800 "send_buf_size": 4096, 00:24:02.800 "enable_recv_pipe": true, 00:24:02.800 "enable_quickack": false, 00:24:02.800 "enable_placement_id": 0, 00:24:02.800 "enable_zerocopy_send_server": true, 00:24:02.800 "enable_zerocopy_send_client": false, 00:24:02.800 "zerocopy_threshold": 0, 00:24:02.800 "tls_version": 0, 00:24:02.800 "enable_ktls": false 00:24:02.800 } 00:24:02.800 }, 00:24:02.800 { 00:24:02.800 "method": "sock_impl_set_options", 00:24:02.800 "params": { 00:24:02.800 "impl_name": "posix", 00:24:02.800 "recv_buf_size": 2097152, 00:24:02.800 "send_buf_size": 2097152, 00:24:02.800 "enable_recv_pipe": true, 00:24:02.800 "enable_quickack": false, 00:24:02.800 "enable_placement_id": 0, 00:24:02.800 "enable_zerocopy_send_server": true, 00:24:02.800 "enable_zerocopy_send_client": false, 00:24:02.800 "zerocopy_threshold": 0, 00:24:02.800 "tls_version": 0, 00:24:02.800 "enable_ktls": false 00:24:02.800 } 00:24:02.800 } 00:24:02.800 ] 00:24:02.800 }, 00:24:02.800 { 00:24:02.800 "subsystem": "vmd", 00:24:02.800 "config": [] 00:24:02.800 }, 00:24:02.800 { 00:24:02.800 "subsystem": "accel", 00:24:02.800 "config": [ 00:24:02.800 { 00:24:02.800 "method": "accel_set_options", 00:24:02.800 "params": { 00:24:02.800 "small_cache_size": 128, 00:24:02.800 "large_cache_size": 16, 00:24:02.800 "task_count": 2048, 00:24:02.800 "sequence_count": 2048, 00:24:02.800 "buf_count": 2048 00:24:02.800 } 00:24:02.800 } 00:24:02.800 ] 00:24:02.800 }, 00:24:02.800 { 00:24:02.800 "subsystem": "bdev", 00:24:02.800 "config": [ 00:24:02.800 { 00:24:02.800 "method": "bdev_set_options", 00:24:02.800 "params": { 00:24:02.800 "bdev_io_pool_size": 65535, 00:24:02.800 "bdev_io_cache_size": 256, 00:24:02.800 "bdev_auto_examine": true, 00:24:02.800 "iobuf_small_cache_size": 128, 00:24:02.800 "iobuf_large_cache_size": 16 00:24:02.800 } 00:24:02.800 }, 00:24:02.800 { 00:24:02.800 "method": "bdev_raid_set_options", 00:24:02.800 "params": { 00:24:02.800 "process_window_size_kb": 1024 00:24:02.800 } 00:24:02.800 }, 00:24:02.800 { 00:24:02.800 "method": "bdev_iscsi_set_options", 00:24:02.800 "params": { 00:24:02.800 "timeout_sec": 30 00:24:02.800 } 00:24:02.800 }, 00:24:02.800 { 00:24:02.800 "method": "bdev_nvme_set_options", 00:24:02.800 "params": { 00:24:02.800 "action_on_timeout": "none", 00:24:02.800 "timeout_us": 0, 00:24:02.800 "timeout_admin_us": 0, 00:24:02.800 "keep_alive_timeout_ms": 10000, 00:24:02.800 "arbitration_burst": 0, 00:24:02.800 "low_priority_weight": 0, 00:24:02.800 "medium_priority_weight": 0, 00:24:02.800 "high_priority_weight": 0, 00:24:02.800 "nvme_adminq_poll_period_us": 10000, 00:24:02.800 "nvme_ioq_poll_period_us": 0, 00:24:02.800 "io_queue_requests": 0, 00:24:02.800 "delay_cmd_submit": true, 00:24:02.800 "transport_retry_count": 4, 00:24:02.800 "bdev_retry_count": 3, 00:24:02.800 "transport_ack_timeout": 0, 00:24:02.800 "ctrlr_loss_timeout_sec": 0, 00:24:02.800 "reconnect_delay_sec": 0, 00:24:02.800 "fast_io_fail_timeout_sec": 0, 00:24:02.800 "disable_auto_failback": false, 00:24:02.800 "generate_uuids": false, 00:24:02.800 "transport_tos": 0, 00:24:02.800 "nvme_error_stat": false, 00:24:02.800 "rdma_srq_size": 0, 00:24:02.800 "io_path_stat": false, 00:24:02.800 "allow_accel_sequence": false, 00:24:02.800 "rdma_max_cq_size": 0, 
00:24:02.800 "rdma_cm_event_timeout_ms": 0, 00:24:02.800 "dhchap_digests": [ 00:24:02.800 "sha256", 00:24:02.800 "sha384", 00:24:02.800 "sha512" 00:24:02.800 ], 00:24:02.800 "dhchap_dhgroups": [ 00:24:02.800 "null", 00:24:02.800 "ffdhe2048", 00:24:02.800 "ffdhe3072", 00:24:02.800 "ffdhe4096", 00:24:02.800 "ffdhe6144", 00:24:02.800 "ffdhe8192" 00:24:02.800 ] 00:24:02.800 } 00:24:02.800 }, 00:24:02.800 { 00:24:02.800 "method": "bdev_nvme_set_hotplug", 00:24:02.800 "params": { 00:24:02.800 "period_us": 100000, 00:24:02.800 "enable": false 00:24:02.800 } 00:24:02.800 }, 00:24:02.800 { 00:24:02.800 "method": "bdev_malloc_create", 00:24:02.800 "params": { 00:24:02.800 "name": "malloc0", 00:24:02.800 "num_blocks": 8192, 00:24:02.800 "block_size": 4096, 00:24:02.800 "physical_block_size": 4096, 00:24:02.800 "uuid": "a5078f15-9318-4c61-b957-498daf0fe2bf", 00:24:02.800 "optimal_io_boundary": 0 00:24:02.800 } 00:24:02.800 }, 00:24:02.800 { 00:24:02.800 "method": "bdev_wait_for_examine" 00:24:02.800 } 00:24:02.800 ] 00:24:02.800 }, 00:24:02.800 { 00:24:02.800 "subsystem": "nbd", 00:24:02.800 "config": [] 00:24:02.800 }, 00:24:02.800 { 00:24:02.800 "subsystem": "scheduler", 00:24:02.800 "config": [ 00:24:02.800 { 00:24:02.800 "method": "framework_set_scheduler", 00:24:02.800 "params": { 00:24:02.800 "name": "static" 00:24:02.800 } 00:24:02.800 } 00:24:02.800 ] 00:24:02.800 }, 00:24:02.800 { 00:24:02.800 "subsystem": "nvmf", 00:24:02.800 "config": [ 00:24:02.800 { 00:24:02.800 "method": "nvmf_set_config", 00:24:02.800 "params": { 00:24:02.800 "discovery_filter": "match_any", 00:24:02.800 "admin_cmd_passthru": { 00:24:02.800 "identify_ctrlr": false 00:24:02.800 } 00:24:02.800 } 00:24:02.800 }, 00:24:02.800 { 00:24:02.800 "method": "nvmf_set_max_subsystems", 00:24:02.800 "params": { 00:24:02.800 "max_subsystems": 1024 00:24:02.800 } 00:24:02.800 }, 00:24:02.800 { 00:24:02.800 "method": "nvmf_set_crdt", 00:24:02.800 "params": { 00:24:02.800 "crdt1": 0, 00:24:02.800 "crdt2": 0, 00:24:02.800 "crdt3": 0 00:24:02.800 } 00:24:02.800 }, 00:24:02.800 { 00:24:02.800 "method": "nvmf_create_transport", 00:24:02.800 "params": { 00:24:02.800 "trtype": "TCP", 00:24:02.800 "max_queue_depth": 128, 00:24:02.800 "max_io_qpairs_per_ctrlr": 127, 00:24:02.800 "in_capsule_data_size": 4096, 00:24:02.800 "max_io_size": 131072, 00:24:02.800 "io_unit_size": 131072, 00:24:02.800 "max_aq_depth": 128, 00:24:02.800 "num_shared_buffers": 511, 00:24:02.800 "buf_cache_size": 4294967295, 00:24:02.800 "dif_insert_or_strip": false, 00:24:02.800 "zcopy": false, 00:24:02.800 "c2h_success": false, 00:24:02.800 "sock_priority": 0, 00:24:02.800 "abort_timeout_sec": 1, 00:24:02.800 "ack_timeout": 0, 00:24:02.800 "data_wr_pool_size": 0 00:24:02.800 } 00:24:02.800 }, 00:24:02.800 { 00:24:02.800 "method": "nvmf_create_subsystem", 00:24:02.800 "params": { 00:24:02.800 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:02.800 "allow_any_host": false, 00:24:02.800 "serial_number": "00000000000000000000", 00:24:02.800 "model_number": "SPDK bdev Controller", 00:24:02.800 "max_namespaces": 32, 00:24:02.800 "min_cntlid": 1, 00:24:02.800 "max_cntlid": 65519, 00:24:02.800 "ana_reporting": false 00:24:02.800 } 00:24:02.800 }, 00:24:02.800 { 00:24:02.800 "method": "nvmf_subsystem_add_host", 00:24:02.800 "params": { 00:24:02.800 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:02.800 "host": "nqn.2016-06.io.spdk:host1", 00:24:02.800 "psk": "key0" 00:24:02.800 } 00:24:02.800 }, 00:24:02.800 { 00:24:02.800 "method": "nvmf_subsystem_add_ns", 00:24:02.800 "params": { 
00:24:02.800 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:02.800 "namespace": { 00:24:02.800 "nsid": 1, 00:24:02.800 "bdev_name": "malloc0", 00:24:02.800 "nguid": "A5078F1593184C61B957498DAF0FE2BF", 00:24:02.800 "uuid": "a5078f15-9318-4c61-b957-498daf0fe2bf", 00:24:02.800 "no_auto_visible": false 00:24:02.800 } 00:24:02.800 } 00:24:02.800 }, 00:24:02.800 { 00:24:02.800 "method": "nvmf_subsystem_add_listener", 00:24:02.800 "params": { 00:24:02.800 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:02.800 "listen_address": { 00:24:02.800 "trtype": "TCP", 00:24:02.800 "adrfam": "IPv4", 00:24:02.800 "traddr": "10.0.0.2", 00:24:02.800 "trsvcid": "4420" 00:24:02.800 }, 00:24:02.800 "secure_channel": true 00:24:02.800 } 00:24:02.800 } 00:24:02.800 ] 00:24:02.801 } 00:24:02.801 ] 00:24:02.801 }' 00:24:02.801 09:33:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:02.801 09:33:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:02.801 09:33:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=785197 00:24:02.801 09:33:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:24:02.801 09:33:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 785197 00:24:02.801 09:33:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 785197 ']' 00:24:02.801 09:33:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:02.801 09:33:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:02.801 09:33:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:02.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:02.801 09:33:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:02.801 09:33:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:02.801 [2024-07-14 09:33:47.164904] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:24:02.801 [2024-07-14 09:33:47.164984] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:02.801 EAL: No free 2048 kB hugepages reported on node 1 00:24:02.801 [2024-07-14 09:33:47.231361] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.058 [2024-07-14 09:33:47.326995] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:03.058 [2024-07-14 09:33:47.327061] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:03.058 [2024-07-14 09:33:47.327090] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:03.058 [2024-07-14 09:33:47.327103] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:03.058 [2024-07-14 09:33:47.327113] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:03.058 [2024-07-14 09:33:47.327218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:03.316 [2024-07-14 09:33:47.572581] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:03.316 [2024-07-14 09:33:47.604591] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:03.316 [2024-07-14 09:33:47.617078] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:03.880 09:33:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:03.880 09:33:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:03.880 09:33:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:03.880 09:33:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:03.880 09:33:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:03.880 09:33:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:03.880 09:33:48 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=785346 00:24:03.880 09:33:48 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 785346 /var/tmp/bdevperf.sock 00:24:03.880 09:33:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 785346 ']' 00:24:03.880 09:33:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:03.880 09:33:48 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:24:03.880 09:33:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:03.880 09:33:48 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:24:03.880 "subsystems": [ 00:24:03.880 { 00:24:03.880 "subsystem": "keyring", 00:24:03.880 "config": [ 00:24:03.880 { 00:24:03.880 "method": "keyring_file_add_key", 00:24:03.880 "params": { 00:24:03.880 "name": "key0", 00:24:03.880 "path": "/tmp/tmp.n1promfwBG" 00:24:03.880 } 00:24:03.880 } 00:24:03.880 ] 00:24:03.880 }, 00:24:03.880 { 00:24:03.880 "subsystem": "iobuf", 00:24:03.880 "config": [ 00:24:03.880 { 00:24:03.880 "method": "iobuf_set_options", 00:24:03.880 "params": { 00:24:03.880 "small_pool_count": 8192, 00:24:03.880 "large_pool_count": 1024, 00:24:03.880 "small_bufsize": 8192, 00:24:03.880 "large_bufsize": 135168 00:24:03.880 } 00:24:03.880 } 00:24:03.880 ] 00:24:03.880 }, 00:24:03.880 { 00:24:03.880 "subsystem": "sock", 00:24:03.880 "config": [ 00:24:03.880 { 00:24:03.880 "method": "sock_set_default_impl", 00:24:03.880 "params": { 00:24:03.880 "impl_name": "posix" 00:24:03.880 } 00:24:03.880 }, 00:24:03.880 { 00:24:03.880 "method": "sock_impl_set_options", 00:24:03.880 "params": { 00:24:03.880 "impl_name": "ssl", 00:24:03.880 "recv_buf_size": 4096, 00:24:03.880 "send_buf_size": 4096, 00:24:03.880 "enable_recv_pipe": true, 00:24:03.880 "enable_quickack": false, 00:24:03.880 "enable_placement_id": 0, 00:24:03.880 "enable_zerocopy_send_server": true, 00:24:03.880 "enable_zerocopy_send_client": false, 00:24:03.880 "zerocopy_threshold": 0, 00:24:03.880 "tls_version": 0, 00:24:03.880 "enable_ktls": false 00:24:03.880 } 00:24:03.880 }, 00:24:03.880 { 00:24:03.880 "method": "sock_impl_set_options", 00:24:03.880 "params": { 00:24:03.880 "impl_name": "posix", 00:24:03.880 "recv_buf_size": 2097152, 00:24:03.880 "send_buf_size": 2097152, 00:24:03.880 
"enable_recv_pipe": true, 00:24:03.880 "enable_quickack": false, 00:24:03.880 "enable_placement_id": 0, 00:24:03.880 "enable_zerocopy_send_server": true, 00:24:03.880 "enable_zerocopy_send_client": false, 00:24:03.880 "zerocopy_threshold": 0, 00:24:03.880 "tls_version": 0, 00:24:03.880 "enable_ktls": false 00:24:03.880 } 00:24:03.880 } 00:24:03.880 ] 00:24:03.880 }, 00:24:03.880 { 00:24:03.880 "subsystem": "vmd", 00:24:03.880 "config": [] 00:24:03.880 }, 00:24:03.880 { 00:24:03.880 "subsystem": "accel", 00:24:03.880 "config": [ 00:24:03.880 { 00:24:03.880 "method": "accel_set_options", 00:24:03.880 "params": { 00:24:03.880 "small_cache_size": 128, 00:24:03.880 "large_cache_size": 16, 00:24:03.880 "task_count": 2048, 00:24:03.880 "sequence_count": 2048, 00:24:03.880 "buf_count": 2048 00:24:03.880 } 00:24:03.880 } 00:24:03.880 ] 00:24:03.880 }, 00:24:03.880 { 00:24:03.880 "subsystem": "bdev", 00:24:03.880 "config": [ 00:24:03.880 { 00:24:03.880 "method": "bdev_set_options", 00:24:03.880 "params": { 00:24:03.880 "bdev_io_pool_size": 65535, 00:24:03.880 "bdev_io_cache_size": 256, 00:24:03.880 "bdev_auto_examine": true, 00:24:03.880 "iobuf_small_cache_size": 128, 00:24:03.880 "iobuf_large_cache_size": 16 00:24:03.880 } 00:24:03.880 }, 00:24:03.880 { 00:24:03.880 "method": "bdev_raid_set_options", 00:24:03.880 "params": { 00:24:03.880 "process_window_size_kb": 1024 00:24:03.880 } 00:24:03.880 }, 00:24:03.880 { 00:24:03.880 "method": "bdev_iscsi_set_options", 00:24:03.880 "params": { 00:24:03.880 "timeout_sec": 30 00:24:03.880 } 00:24:03.880 }, 00:24:03.880 { 00:24:03.880 "method": "bdev_nvme_set_options", 00:24:03.880 "params": { 00:24:03.880 "action_on_timeout": "none", 00:24:03.880 "timeout_us": 0, 00:24:03.880 "timeout_admin_us": 0, 00:24:03.880 "keep_alive_timeout_ms": 10000, 00:24:03.880 "arbitration_burst": 0, 00:24:03.880 "low_priority_weight": 0, 00:24:03.880 "medium_priority_weight": 0, 00:24:03.880 "high_priority_weight": 0, 00:24:03.880 "nvme_adminq_poll_period_us": 10000, 00:24:03.880 "nvme_ioq_poll_period_us": 0, 00:24:03.880 "io_queue_requests": 512, 00:24:03.880 "delay_cmd_submit": true, 00:24:03.880 "transport_retry_count": 4, 00:24:03.880 "bdev_retry_count": 3, 00:24:03.880 "transport_ack_timeout": 0, 00:24:03.880 "ctrlr_loss_timeout_sec": 0, 00:24:03.880 "reconnect_delay_sec": 0, 00:24:03.880 "fast_io_fail_timeout_sec": 0, 00:24:03.880 "disable_auto_failback": false, 00:24:03.880 "generate_uuids": false, 00:24:03.880 "transport_tos": 0, 00:24:03.880 "nvme_error_stat": false, 00:24:03.880 "rdma_srq_size": 0, 00:24:03.880 "io_path_stat": false, 00:24:03.880 "allow_accel_sequence": false, 00:24:03.880 "rdma_max_cq_size": 0, 00:24:03.880 "rdma_cm_event_timeout_ms": 0, 00:24:03.880 "dhchap_digests": [ 00:24:03.880 "sha256", 00:24:03.880 "sha384", 00:24:03.880 "sha512" 00:24:03.880 ], 00:24:03.880 "dhchap_dhgroups": [ 00:24:03.880 "null", 00:24:03.880 "ffdhe2048", 00:24:03.880 "ffdhe3072", 00:24:03.880 "ffdhe4096", 00:24:03.880 "ffdhe6144", 00:24:03.880 "ffdhe8192" 00:24:03.880 ] 00:24:03.880 } 00:24:03.880 }, 00:24:03.880 { 00:24:03.880 "method": "bdev_nvme_attach_controller", 00:24:03.880 "params": { 00:24:03.880 "name": "nvme0", 00:24:03.880 "trtype": "TCP", 00:24:03.880 "adrfam": "IPv4", 00:24:03.880 "traddr": "10.0.0.2", 00:24:03.880 "trsvcid": "4420", 00:24:03.880 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:03.880 "prchk_reftag": false, 00:24:03.880 "prchk_guard": false, 00:24:03.880 "ctrlr_loss_timeout_sec": 0, 00:24:03.880 "reconnect_delay_sec": 0, 00:24:03.880 
"fast_io_fail_timeout_sec": 0, 00:24:03.880 "psk": "key0", 00:24:03.880 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:03.880 "hdgst": false, 00:24:03.880 "ddgst": false 00:24:03.880 } 00:24:03.880 }, 00:24:03.880 { 00:24:03.880 "method": "bdev_nvme_set_hotplug", 00:24:03.880 "params": { 00:24:03.880 "period_us": 100000, 00:24:03.880 "enable": false 00:24:03.881 } 00:24:03.881 }, 00:24:03.881 { 00:24:03.881 "method": "bdev_enable_histogram", 00:24:03.881 "params": { 00:24:03.881 "name": "nvme0n1", 00:24:03.881 "enable": true 00:24:03.881 } 00:24:03.881 }, 00:24:03.881 { 00:24:03.881 "method": "bdev_wait_for_examine" 00:24:03.881 } 00:24:03.881 ] 00:24:03.881 }, 00:24:03.881 { 00:24:03.881 "subsystem": "nbd", 00:24:03.881 "config": [] 00:24:03.881 } 00:24:03.881 ] 00:24:03.881 }' 00:24:03.881 09:33:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:03.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:03.881 09:33:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:03.881 09:33:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:03.881 [2024-07-14 09:33:48.216412] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:24:03.881 [2024-07-14 09:33:48.216492] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid785346 ] 00:24:03.881 EAL: No free 2048 kB hugepages reported on node 1 00:24:03.881 [2024-07-14 09:33:48.278130] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.138 [2024-07-14 09:33:48.370922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:04.138 [2024-07-14 09:33:48.554079] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:05.077 09:33:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:05.077 09:33:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:05.077 09:33:49 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:05.077 09:33:49 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:24:05.078 09:33:49 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:05.078 09:33:49 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:05.348 Running I/O for 1 seconds... 
00:24:06.279 00:24:06.279 Latency(us) 00:24:06.279 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:06.279 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:06.279 Verification LBA range: start 0x0 length 0x2000 00:24:06.279 nvme0n1 : 1.09 1447.58 5.65 0.00 0.00 85773.38 6747.78 137479.96 00:24:06.279 =================================================================================================================== 00:24:06.279 Total : 1447.58 5.65 0.00 0.00 85773.38 6747.78 137479.96 00:24:06.279 0 00:24:06.279 09:33:50 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:24:06.279 09:33:50 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:24:06.279 09:33:50 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:24:06.279 09:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:24:06.279 09:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:24:06.279 09:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:24:06.279 09:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:06.279 09:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:24:06.279 09:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:24:06.279 09:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:24:06.279 09:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:06.279 nvmf_trace.0 00:24:06.537 09:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:24:06.537 09:33:50 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 785346 00:24:06.537 09:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 785346 ']' 00:24:06.537 09:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 785346 00:24:06.537 09:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:06.537 09:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:06.537 09:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 785346 00:24:06.537 09:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:06.537 09:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:06.537 09:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 785346' 00:24:06.537 killing process with pid 785346 00:24:06.537 09:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 785346 00:24:06.537 Received shutdown signal, test time was about 1.000000 seconds 00:24:06.537 00:24:06.537 Latency(us) 00:24:06.537 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:06.537 =================================================================================================================== 00:24:06.537 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:06.537 09:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 785346 00:24:06.795 09:33:51 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:06.795 09:33:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:06.795 09:33:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:24:06.795 
09:33:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:06.795 09:33:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:24:06.795 09:33:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:06.795 09:33:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:06.795 rmmod nvme_tcp 00:24:06.795 rmmod nvme_fabrics 00:24:06.795 rmmod nvme_keyring 00:24:06.795 09:33:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:06.795 09:33:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:24:06.795 09:33:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:24:06.795 09:33:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 785197 ']' 00:24:06.795 09:33:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 785197 00:24:06.795 09:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 785197 ']' 00:24:06.795 09:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 785197 00:24:06.795 09:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:06.795 09:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:06.795 09:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 785197 00:24:06.795 09:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:06.795 09:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:06.795 09:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 785197' 00:24:06.795 killing process with pid 785197 00:24:06.795 09:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 785197 00:24:06.795 09:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 785197 00:24:07.053 09:33:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:07.053 09:33:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:07.053 09:33:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:07.053 09:33:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:07.053 09:33:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:07.053 09:33:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:07.053 09:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:07.053 09:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.952 09:33:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:08.952 09:33:53 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.bE6h8Ht9ZF /tmp/tmp.61orDEHT65 /tmp/tmp.n1promfwBG 00:24:08.952 00:24:08.952 real 1m19.388s 00:24:08.952 user 2m6.214s 00:24:08.952 sys 0m28.640s 00:24:08.952 09:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:08.952 09:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:08.952 ************************************ 00:24:08.952 END TEST nvmf_tls 00:24:08.952 ************************************ 00:24:08.952 09:33:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:08.952 09:33:53 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:08.952 09:33:53 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:08.952 09:33:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:08.952 09:33:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:09.210 ************************************ 00:24:09.210 START TEST nvmf_fips 00:24:09.210 ************************************ 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:09.210 * Looking for test storage... 00:24:09.210 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:24:09.210 
09:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:09.210 09:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:24:09.211 Error setting digest 00:24:09.211 00020480797F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:24:09.211 00020480797F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:24:09.211 09:33:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:11.118 
09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:11.118 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:11.118 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:11.118 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:11.118 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:11.118 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:11.377 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:11.377 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:11.377 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:11.377 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:11.377 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:11.377 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:11.377 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:24:11.377 00:24:11.377 --- 10.0.0.2 ping statistics --- 00:24:11.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:11.377 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:24:11.377 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:11.377 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:11.377 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:24:11.377 00:24:11.377 --- 10.0.0.1 ping statistics --- 00:24:11.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:11.377 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:24:11.377 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:11.377 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:24:11.377 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:11.377 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:11.377 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:11.377 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:11.377 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:11.377 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:11.377 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:11.377 09:33:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:24:11.377 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:11.377 09:33:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:11.377 09:33:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:11.377 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=787586 00:24:11.377 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:11.377 09:33:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 787586 00:24:11.377 09:33:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 787586 ']' 00:24:11.377 09:33:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:11.377 09:33:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:11.377 09:33:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:11.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:11.377 09:33:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:11.377 09:33:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:11.377 [2024-07-14 09:33:55.739692] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:24:11.377 [2024-07-14 09:33:55.739789] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:11.377 EAL: No free 2048 kB hugepages reported on node 1 00:24:11.377 [2024-07-14 09:33:55.809242] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.635 [2024-07-14 09:33:55.900995] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:11.635 [2024-07-14 09:33:55.901060] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
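The fips.sh stage traced above boils down to two checks before the NVMe/TCP test even starts: the OpenSSL provider list must contain both a base and a fips provider, and a non-approved digest such as MD5 must be rejected. The lines below are a minimal stand-alone sketch of that gate, assuming an OpenSSL 3.x host with a FIPS provider installed; the config file name is illustrative and the script is a rough equivalent of the logic in fips.sh, not the script itself.

    #!/usr/bin/env bash
    # Sketch of the FIPS sanity check performed by fips.sh (config name is illustrative).
    set -euo pipefail

    export OPENSSL_CONF=spdk_fips.conf   # config that activates the fips provider

    # Exactly two providers should be loaded: a base provider and a fips provider.
    count=$(openssl list -providers | grep -c 'name')
    [ "$count" -eq 2 ] || { echo "expected 2 providers, found $count"; exit 1; }
    openssl list -providers | grep -qi base || { echo "base provider missing"; exit 1; }
    openssl list -providers | grep -qi fips || { echo "fips provider missing"; exit 1; }

    # MD5 is not an approved algorithm, so it must fail when FIPS mode is enforced.
    if echo test | openssl md5 >/dev/null 2>&1; then
        echo "openssl md5 succeeded - FIPS mode is not enforced"
        exit 1
    fi
    echo "FIPS checks passed"

On this job the md5 attempt fails with the "inner_evp_generic_fetch:unsupported" error shown above, which is exactly the outcome the test expects.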
00:24:11.635 [2024-07-14 09:33:55.901088] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:11.635 [2024-07-14 09:33:55.901101] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:11.635 [2024-07-14 09:33:55.901114] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:11.635 [2024-07-14 09:33:55.901144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:12.568 09:33:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:12.568 09:33:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:24:12.568 09:33:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:12.568 09:33:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:12.568 09:33:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:12.568 09:33:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:12.568 09:33:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:24:12.568 09:33:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:12.568 09:33:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:12.568 09:33:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:12.568 09:33:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:12.568 09:33:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:12.568 09:33:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:12.568 09:33:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:12.568 [2024-07-14 09:33:56.957634] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:12.568 [2024-07-14 09:33:56.973605] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:12.568 [2024-07-14 09:33:56.973807] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:12.568 [2024-07-14 09:33:57.005743] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:12.568 malloc0 00:24:12.827 09:33:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:12.827 09:33:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=787863 00:24:12.827 09:33:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:12.827 09:33:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 787863 /var/tmp/bdevperf.sock 00:24:12.827 09:33:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 787863 ']' 00:24:12.827 09:33:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:12.827 09:33:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # 
local max_retries=100 00:24:12.827 09:33:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:12.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:12.827 09:33:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:12.827 09:33:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:12.827 [2024-07-14 09:33:57.099307] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:24:12.827 [2024-07-14 09:33:57.099403] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid787863 ] 00:24:12.827 EAL: No free 2048 kB hugepages reported on node 1 00:24:12.827 [2024-07-14 09:33:57.157689] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.827 [2024-07-14 09:33:57.243574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:13.085 09:33:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:13.085 09:33:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:24:13.085 09:33:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:13.342 [2024-07-14 09:33:57.582819] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:13.342 [2024-07-14 09:33:57.582977] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:13.342 TLSTESTn1 00:24:13.342 09:33:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:13.342 Running I/O for 10 seconds... 
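Condensed out of the trace above, the TLS portion of the FIPS test is: write a PSK in NVMe TLS interchange format to a 0600 file, start bdevperf in wait-for-RPC mode, attach an NVMe-oF/TCP controller with --psk, and run the 10-second verify workload. The sketch below reuses the same RPCs and flags that appear in the log; the $SPDK_DIR prefix and the key value are placeholders, it covers only the initiator/bdevperf half, and it assumes the target was already configured with the matching PSK as shown earlier.

    # Condensed sketch of the TLS/PSK bdevperf flow ($SPDK_DIR and the key value are placeholders).
    SPDK_DIR=/path/to/spdk
    KEY_FILE=key.txt

    echo -n 'NVMeTLSkey-1:01:<base64-psk>:' > "$KEY_FILE"
    chmod 0600 "$KEY_FILE"

    # Start bdevperf in "wait for RPC" mode (-z) on its own socket.
    "$SPDK_DIR"/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 &
    while [ ! -S /var/tmp/bdevperf.sock ]; do sleep 0.1; done   # crude stand-in for waitforlisten

    # Attach an NVMe-oF/TCP controller to the target, authenticating with the PSK.
    "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$KEY_FILE"

    # Run the configured verify workload and print the per-bdev latency summary.
    "$SPDK_DIR"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests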
00:24:25.534 00:24:25.534 Latency(us) 00:24:25.534 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:25.534 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:25.534 Verification LBA range: start 0x0 length 0x2000 00:24:25.534 TLSTESTn1 : 10.11 1263.87 4.94 0.00 0.00 100815.56 12718.84 125829.12 00:24:25.534 =================================================================================================================== 00:24:25.534 Total : 1263.87 4.94 0.00 0.00 100815.56 12718.84 125829.12 00:24:25.534 0 00:24:25.534 09:34:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:25.534 09:34:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:25.534 09:34:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:24:25.534 09:34:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:24:25.534 09:34:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:24:25.534 09:34:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:25.534 09:34:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:24:25.534 09:34:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:24:25.534 09:34:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:24:25.534 09:34:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:25.534 nvmf_trace.0 00:24:25.534 09:34:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:24:25.535 09:34:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 787863 00:24:25.535 09:34:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 787863 ']' 00:24:25.535 09:34:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 787863 00:24:25.535 09:34:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:24:25.535 09:34:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:25.535 09:34:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 787863 00:24:25.535 09:34:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:25.535 09:34:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:25.535 09:34:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 787863' 00:24:25.535 killing process with pid 787863 00:24:25.535 09:34:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 787863 00:24:25.535 Received shutdown signal, test time was about 10.000000 seconds 00:24:25.535 00:24:25.535 Latency(us) 00:24:25.535 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:25.535 =================================================================================================================== 00:24:25.535 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:25.535 [2024-07-14 09:34:08.047743] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:25.535 09:34:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 787863 00:24:25.535 09:34:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:25.535 09:34:08 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:24:25.535 09:34:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:24:25.535 09:34:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:25.535 09:34:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:24:25.535 09:34:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:25.535 09:34:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:25.535 rmmod nvme_tcp 00:24:25.535 rmmod nvme_fabrics 00:24:25.535 rmmod nvme_keyring 00:24:25.535 09:34:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:25.535 09:34:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:24:25.535 09:34:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:24:25.535 09:34:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 787586 ']' 00:24:25.535 09:34:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 787586 00:24:25.535 09:34:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 787586 ']' 00:24:25.535 09:34:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 787586 00:24:25.535 09:34:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:24:25.535 09:34:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:25.535 09:34:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 787586 00:24:25.535 09:34:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:25.535 09:34:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:25.535 09:34:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 787586' 00:24:25.535 killing process with pid 787586 00:24:25.535 09:34:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 787586 00:24:25.535 [2024-07-14 09:34:08.366993] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:25.535 09:34:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 787586 00:24:25.535 09:34:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:25.535 09:34:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:25.535 09:34:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:25.535 09:34:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:25.535 09:34:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:25.535 09:34:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:25.535 09:34:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:25.535 09:34:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:26.467 09:34:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:26.467 09:34:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:26.467 00:24:26.467 real 0m17.229s 00:24:26.467 user 0m21.414s 00:24:26.467 sys 0m6.380s 00:24:26.467 09:34:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:26.467 09:34:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:26.467 ************************************ 00:24:26.467 END TEST nvmf_fips 00:24:26.467 
************************************ 00:24:26.467 09:34:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:26.467 09:34:10 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:24:26.467 09:34:10 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:26.467 09:34:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:26.467 09:34:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:26.467 09:34:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:26.467 ************************************ 00:24:26.467 START TEST nvmf_fuzz 00:24:26.467 ************************************ 00:24:26.467 09:34:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:26.467 * Looking for test storage... 00:24:26.467 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:26.467 09:34:10 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:26.467 09:34:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:24:26.467 09:34:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:26.467 09:34:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:26.467 09:34:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:26.467 09:34:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:26.467 09:34:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:26.467 09:34:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:26.467 09:34:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:26.467 09:34:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:26.467 09:34:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:26.467 09:34:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:26.467 09:34:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:26.467 09:34:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:26.467 09:34:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:26.467 09:34:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:26.467 09:34:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:26.467 09:34:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:26.467 09:34:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:26.467 09:34:10 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:26.467 09:34:10 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:26.467 09:34:10 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:26.467 09:34:10 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.468 09:34:10 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.468 09:34:10 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.468 09:34:10 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:24:26.468 09:34:10 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.468 09:34:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:24:26.468 09:34:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:26.468 09:34:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:26.468 09:34:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:26.468 09:34:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:26.468 09:34:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:26.468 09:34:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:26.468 09:34:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:26.468 09:34:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:26.468 09:34:10 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:24:26.468 09:34:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:26.468 09:34:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:26.468 09:34:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:26.468 09:34:10 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:26.468 09:34:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:26.468 09:34:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:26.468 09:34:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:26.468 09:34:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:26.468 09:34:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:26.468 09:34:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:26.468 09:34:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:24:26.468 09:34:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:28.414 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:28.414 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:24:28.414 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:28.414 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:28.414 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:28.414 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:28.414 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:28.414 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@327 
-- # [[ e810 == mlx5 ]] 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:28.415 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:28.415 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:28.415 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 
-- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:28.415 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:28.415 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:28.674 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:28.674 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:28.674 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:24:28.674 00:24:28.674 --- 10.0.0.2 ping statistics --- 00:24:28.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.675 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:24:28.675 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:28.675 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:28.675 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:24:28.675 00:24:28.675 --- 10.0.0.1 ping statistics --- 00:24:28.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.675 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:24:28.675 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:28.675 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:24:28.675 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:28.675 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:28.675 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:28.675 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:28.675 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:28.675 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:28.675 09:34:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:28.675 09:34:12 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=790986 00:24:28.675 09:34:12 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:28.675 09:34:12 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:28.675 09:34:12 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 790986 00:24:28.675 09:34:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@829 -- # '[' -z 790986 ']' 00:24:28.675 09:34:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:28.675 09:34:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:28.675 09:34:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:28.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
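Both the fips and fuzz stages call the same nvmftestinit helper, whose effect is visible in the common.sh trace just above: one E810 port (cvl_0_0 on this host) is moved into a private namespace and becomes the target side at 10.0.0.2, the other (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, TCP/4420 is opened in the firewall, and both directions are ping-checked. Pulled out of the trace, the topology setup is roughly the following sketch; interface names are the ones from this job and would differ elsewhere.

    # Sketch of the nvmf_tcp_init topology built by nvmf/common.sh (interface names from this job).
    TARGET_IF=cvl_0_0        # moved into the namespace, becomes 10.0.0.2
    INITIATOR_IF=cvl_0_1     # stays in the root namespace, becomes 10.0.0.1
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"

    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"

    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up

    # Allow NVMe/TCP traffic in, then sanity-check both directions.
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1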
00:24:28.675 09:34:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:28.675 09:34:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:28.933 09:34:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:28.933 09:34:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@862 -- # return 0 00:24:28.933 09:34:13 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:28.933 09:34:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.933 09:34:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:28.933 09:34:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.933 09:34:13 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:24:28.933 09:34:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.933 09:34:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:28.933 Malloc0 00:24:28.933 09:34:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.933 09:34:13 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:28.933 09:34:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.933 09:34:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:28.933 09:34:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.933 09:34:13 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:28.933 09:34:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.933 09:34:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:28.933 09:34:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.933 09:34:13 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:28.933 09:34:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.933 09:34:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:28.933 09:34:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.934 09:34:13 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:24:28.934 09:34:13 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:25:01.004 Fuzzing completed. 
Shutting down the fuzz application 00:25:01.004 00:25:01.004 Dumping successful admin opcodes: 00:25:01.004 8, 9, 10, 24, 00:25:01.004 Dumping successful io opcodes: 00:25:01.004 0, 9, 00:25:01.004 NS: 0x200003aeff00 I/O qp, Total commands completed: 451555, total successful commands: 2628, random_seed: 2817158400 00:25:01.004 NS: 0x200003aeff00 admin qp, Total commands completed: 56256, total successful commands: 447, random_seed: 945002560 00:25:01.004 09:34:43 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:25:01.004 Fuzzing completed. Shutting down the fuzz application 00:25:01.004 00:25:01.004 Dumping successful admin opcodes: 00:25:01.004 24, 00:25:01.004 Dumping successful io opcodes: 00:25:01.004 00:25:01.004 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1351805620 00:25:01.004 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1351938528 00:25:01.004 09:34:45 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:01.004 09:34:45 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.004 09:34:45 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:01.004 09:34:45 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.005 09:34:45 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:01.005 09:34:45 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:25:01.005 09:34:45 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:01.005 09:34:45 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:25:01.005 09:34:45 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:01.005 09:34:45 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:25:01.005 09:34:45 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:01.005 09:34:45 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:01.005 rmmod nvme_tcp 00:25:01.005 rmmod nvme_fabrics 00:25:01.005 rmmod nvme_keyring 00:25:01.005 09:34:45 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:01.005 09:34:45 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:25:01.005 09:34:45 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:25:01.005 09:34:45 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 790986 ']' 00:25:01.005 09:34:45 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 790986 00:25:01.005 09:34:45 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@948 -- # '[' -z 790986 ']' 00:25:01.005 09:34:45 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # kill -0 790986 00:25:01.005 09:34:45 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # uname 00:25:01.005 09:34:45 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:01.005 09:34:45 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 790986 00:25:01.005 09:34:45 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:01.005 09:34:45 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:01.005 
09:34:45 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 790986' 00:25:01.005 killing process with pid 790986 00:25:01.005 09:34:45 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@967 -- # kill 790986 00:25:01.005 09:34:45 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@972 -- # wait 790986 00:25:01.263 09:34:45 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:01.263 09:34:45 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:01.263 09:34:45 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:01.263 09:34:45 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:01.263 09:34:45 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:01.263 09:34:45 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:01.263 09:34:45 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:01.263 09:34:45 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.793 09:34:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:03.793 09:34:47 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:25:03.793 00:25:03.793 real 0m36.971s 00:25:03.793 user 0m51.005s 00:25:03.793 sys 0m15.214s 00:25:03.793 09:34:47 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:03.793 09:34:47 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:03.793 ************************************ 00:25:03.793 END TEST nvmf_fuzz 00:25:03.793 ************************************ 00:25:03.793 09:34:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:03.793 09:34:47 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:03.793 09:34:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:03.793 09:34:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:03.793 09:34:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:03.793 ************************************ 00:25:03.793 START TEST nvmf_multiconnection 00:25:03.793 ************************************ 00:25:03.793 09:34:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:03.793 * Looking for test storage... 
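For reference, the nvmf_fuzz stage that finishes here consists of a very small target configuration plus two nvme_fuzz passes: 30 seconds of randomly generated commands with a fixed seed, then a replay of the canned command set in example.json. Stripped of the trace noise, and with $SPDK_DIR as a placeholder, the stage looks roughly like this sketch built from the RPCs and flags visible in the log above.

    # Condensed sketch of fabrics_fuzz.sh as seen in the trace ($SPDK_DIR is a placeholder).
    SPDK_DIR=/path/to/spdk
    RPC="$SPDK_DIR/scripts/rpc.py"
    TRID='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'

    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create -b Malloc0 64 512
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Pass 1: 30 s of randomly generated admin/IO commands with a fixed seed.
    "$SPDK_DIR"/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$TRID" -N -a

    # Pass 2: replay the canned command set from example.json.
    "$SPDK_DIR"/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F "$TRID" \
        -j "$SPDK_DIR"/test/app/fuzz/nvme_fuzz/example.json -a

The opcode dumps and command counts printed above ("Total commands completed: 451555, total successful commands: 2628" for the random pass) are the fuzzer's own summary; the test passes as long as the target survives both passes.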
00:25:03.793 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:03.793 09:34:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:03.793 09:34:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:25:03.793 09:34:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:03.793 09:34:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:03.793 09:34:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:03.793 09:34:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:03.793 09:34:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:03.793 09:34:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:03.793 09:34:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:03.793 09:34:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:03.793 09:34:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:03.793 09:34:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:03.793 09:34:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:03.793 09:34:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:03.793 09:34:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:03.793 09:34:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:03.793 09:34:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:03.793 09:34:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:03.793 09:34:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:03.793 09:34:47 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:03.793 09:34:47 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:03.793 09:34:47 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:03.793 09:34:47 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.793 09:34:47 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.793 09:34:47 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.793 09:34:47 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:25:03.793 09:34:47 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.793 09:34:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:25:03.793 09:34:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:03.793 09:34:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:03.793 09:34:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:03.793 09:34:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:03.793 09:34:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:03.793 09:34:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:03.793 09:34:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:03.793 09:34:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:03.793 09:34:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:03.794 09:34:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:03.794 09:34:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:25:03.794 09:34:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:25:03.794 09:34:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:03.794 09:34:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:03.794 09:34:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:03.794 09:34:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:25:03.794 09:34:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:03.794 09:34:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.794 09:34:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:03.794 09:34:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.794 09:34:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:03.794 09:34:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:03.794 09:34:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:25:03.794 09:34:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:05.696 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:05.696 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:25:05.696 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:05.696 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:05.696 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:05.696 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:05.696 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:05.696 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:25:05.696 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:05.696 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:25:05.696 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:25:05.696 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:25:05.696 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:25:05.696 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:25:05.696 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:25:05.696 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:05.696 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:05.696 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:05.696 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:05.696 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:05.696 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:05.696 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:05.696 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:05.696 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:05.696 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:05.696 09:34:49 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:05.696 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:05.696 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:05.696 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:05.696 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:05.696 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:05.696 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:05.696 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:05.696 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:05.696 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:05.696 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:05.696 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:05.696 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.696 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.696 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:05.696 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:05.697 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:05.697 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:05.697 09:34:49 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:05.697 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
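The discovery pass above (gather_supported_nvmf_pci_devs) builds per-vendor lists of known NIC device IDs (Intel E810 0x1592/0x159b and X722 0x37d2, plus several Mellanox ConnectX parts), keeps the ones actually present on the bus, and resolves each PCI function to its kernel net device through /sys/bus/pci/devices/<bdf>/net, which is where the two "Found net devices under 0000:0a:00.x: cvl_0_x" lines come from. A minimal stand-alone sketch of the same idea, using lspci instead of the script's internal pci_bus_cache arrays (device IDs taken from the trace):

# Sketch only: the test script walks a cached copy of the PCI bus rather than calling lspci.
for id in 8086:1592 8086:159b 8086:37d2 15b3:1017 15b3:1019; do
    for bdf in $(lspci -D -d "$id" | awk '{print $1}'); do
        echo "Found $bdf ($id)"
        ls "/sys/bus/pci/devices/$bdf/net/" 2>/dev/null    # kernel net device name(s), e.g. cvl_0_0
    done
done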
00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:05.697 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:05.697 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:25:05.697 00:25:05.697 --- 10.0.0.2 ping statistics --- 00:25:05.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.697 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:05.697 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:05.697 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:25:05.697 00:25:05.697 --- 10.0.0.1 ping statistics --- 00:25:05.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.697 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=796701 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 796701 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@829 -- # '[' -z 796701 ']' 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:05.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
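nvmf_tcp_init then wires the two E810 ports together: it flushes any stale addresses, creates the cvl_0_0_ns_spdk namespace, moves the target-side port into it, assigns 10.0.0.2/24 (target) and 10.0.0.1/24 (initiator), opens TCP port 4420 in iptables, and confirms reachability with a ping in each direction, as seen in the ping output above. Condensed into one runnable sequence (same commands as the trace; the cvl_0_0/cvl_0_1 names are specific to this host):

# Same steps as the trace above; run as root.
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                               # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0       # target side
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                            # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                        # target -> initiator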
00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:05.697 09:34:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:05.697 [2024-07-14 09:34:49.929460] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:25:05.697 [2024-07-14 09:34:49.929560] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:05.697 EAL: No free 2048 kB hugepages reported on node 1 00:25:05.697 [2024-07-14 09:34:49.998382] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:05.697 [2024-07-14 09:34:50.098987] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:05.697 [2024-07-14 09:34:50.099047] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:05.697 [2024-07-14 09:34:50.099071] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:05.697 [2024-07-14 09:34:50.099081] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:05.697 [2024-07-14 09:34:50.099091] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:05.697 [2024-07-14 09:34:50.099201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:05.697 [2024-07-14 09:34:50.099268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:05.697 [2024-07-14 09:34:50.099362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:05.697 [2024-07-14 09:34:50.099364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:05.956 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:05.956 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@862 -- # return 0 00:25:05.956 09:34:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:05.956 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:05.956 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:05.956 09:34:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:05.956 09:34:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:05.956 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.956 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:05.956 [2024-07-14 09:34:50.242496] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:05.956 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.956 09:34:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:25:05.956 09:34:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:05.956 09:34:50 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:05.956 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.956 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:05.956 Malloc1 00:25:05.956 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.956 09:34:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:25:05.956 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.956 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:05.956 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.956 09:34:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:05.956 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.956 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:05.956 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.956 09:34:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:05.956 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.956 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:05.956 [2024-07-14 09:34:50.297412] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:05.956 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.956 09:34:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:05.956 09:34:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:25:05.956 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.956 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:05.956 Malloc2 00:25:05.956 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.956 09:34:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:25:05.956 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.956 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:05.956 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.956 09:34:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:25:05.956 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.957 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:05.957 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.957 09:34:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:05.957 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.957 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:05.957 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.957 09:34:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:05.957 09:34:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:25:05.957 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.957 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:05.957 Malloc3 00:25:05.957 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.957 09:34:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:25:05.957 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.957 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:05.957 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.957 09:34:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:25:05.957 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.957 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:05.957 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.957 09:34:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:25:05.957 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.957 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:05.957 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.957 09:34:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:05.957 09:34:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:25:05.957 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.957 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.214 Malloc4 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 
Malloc4 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.214 Malloc5 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.214 Malloc6 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:25:06.214 09:34:50 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.214 Malloc7 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.214 Malloc8 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.214 Malloc9 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.214 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.472 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.472 09:34:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:25:06.472 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.472 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.472 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.472 09:34:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:25:06.472 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.472 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.472 09:34:50 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.472 09:34:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:06.472 09:34:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:25:06.472 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.472 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.472 Malloc10 00:25:06.472 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.472 09:34:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:25:06.472 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.472 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.472 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.472 09:34:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:25:06.472 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.472 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.472 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.472 09:34:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:25:06.472 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.472 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.472 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.472 09:34:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:06.472 09:34:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:25:06.472 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.472 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.472 Malloc11 00:25:06.472 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.472 09:34:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:25:06.472 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.472 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.472 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.472 09:34:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:25:06.472 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.472 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.472 09:34:50 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.472 09:34:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:25:06.472 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.472 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:06.472 09:34:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.472 09:34:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:25:06.472 09:34:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:06.472 09:34:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:07.036 09:34:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:25:07.036 09:34:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:07.036 09:34:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:07.036 09:34:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:07.036 09:34:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:09.608 09:34:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:09.608 09:34:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:09.608 09:34:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:25:09.608 09:34:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:09.608 09:34:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:09.608 09:34:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:09.608 09:34:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:09.608 09:34:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:25:09.865 09:34:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:25:09.865 09:34:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:09.865 09:34:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:09.865 09:34:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:09.865 09:34:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:11.759 09:34:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:11.759 09:34:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:11.759 09:34:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:25:11.759 09:34:56 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:11.759 09:34:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:11.759 09:34:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:11.759 09:34:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:11.759 09:34:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:25:12.325 09:34:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:25:12.325 09:34:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:12.325 09:34:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:12.325 09:34:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:12.325 09:34:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:14.853 09:34:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:14.853 09:34:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:14.853 09:34:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:25:14.853 09:34:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:14.853 09:34:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:14.853 09:34:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:14.853 09:34:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:14.854 09:34:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:25:15.112 09:34:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:25:15.112 09:34:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:15.112 09:34:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:15.112 09:34:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:15.112 09:34:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:17.639 09:35:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:17.639 09:35:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:17.639 09:35:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:25:17.639 09:35:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:17.639 09:35:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:17.639 09:35:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 
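The target-side setup that precedes these connect cycles is a short loop: nvmfappstart launches nvmf_tgt inside the namespace, nvmf_create_transport enables TCP, and for each of the NVMF_SUBSYS=11 subsystems the script creates a 64 MiB malloc bdev with 512-byte blocks, a subsystem cnodeN with serial SPDKN, attaches the bdev as a namespace, and adds a listener on 10.0.0.2:4420. A condensed sketch of that sequence, with scripts/rpc.py standing in for the test's rpc_cmd wrapper and a plain socket check standing in for waitforlisten:

# Sketch of the target setup traced above; rpc.py replaces the rpc_cmd wrapper.
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
while [ ! -S /var/tmp/spdk.sock ]; do sleep 1; done           # crude stand-in for waitforlisten
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
for i in $(seq 1 11); do
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
done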
00:25:17.639 09:35:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:17.639 09:35:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:25:17.897 09:35:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:25:17.897 09:35:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:17.897 09:35:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:17.897 09:35:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:17.897 09:35:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:20.420 09:35:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:20.420 09:35:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:20.420 09:35:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:25:20.420 09:35:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:20.420 09:35:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:20.420 09:35:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:20.420 09:35:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:20.420 09:35:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:25:20.679 09:35:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:25:20.679 09:35:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:20.679 09:35:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:20.679 09:35:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:20.679 09:35:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:23.204 09:35:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:23.204 09:35:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:23.204 09:35:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:25:23.204 09:35:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:23.204 09:35:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:23.204 09:35:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:23.204 09:35:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:23.204 09:35:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:25:23.462 09:35:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:25:23.462 09:35:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:23.462 09:35:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:23.462 09:35:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:23.462 09:35:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:25.989 09:35:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:25.989 09:35:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:25.989 09:35:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:25:25.989 09:35:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:25.989 09:35:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:25.989 09:35:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:25.989 09:35:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:25.989 09:35:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:25:26.247 09:35:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:25:26.247 09:35:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:26.247 09:35:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:26.247 09:35:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:26.247 09:35:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:28.166 09:35:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:28.166 09:35:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:28.166 09:35:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:25:28.166 09:35:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:28.166 09:35:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:28.166 09:35:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:28.166 09:35:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:28.166 09:35:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:25:29.098 09:35:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:25:29.098 09:35:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 
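Each connect cycle follows the same pattern: nvme connect to 10.0.0.2:4420 with this host's NQN and UUID, then poll lsblk until a block device reporting the expected SPDKN serial shows up (waitforserial sleeps 2 s per attempt and gives up after 16 tries). A simplified sketch of one cycle, with the host identifiers taken from the trace; the real helper can also wait for more than one device per serial:

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
connect_and_wait() {
    local nqn=$1 serial=$2 i=0
    nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n "$nqn" -a 10.0.0.2 -s 4420
    while (( i++ <= 15 )); do
        sleep 2
        (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
    done
    return 1                                                  # device never appeared
}
connect_and_wait nqn.2016-06.io.spdk:cnode9 SPDK9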
00:25:29.098 09:35:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:29.098 09:35:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:29.098 09:35:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:31.622 09:35:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:31.622 09:35:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:31.622 09:35:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:25:31.622 09:35:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:31.622 09:35:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:31.622 09:35:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:31.622 09:35:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:31.622 09:35:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:25:31.880 09:35:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:25:31.880 09:35:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:31.880 09:35:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:31.880 09:35:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:31.880 09:35:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:34.402 09:35:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:34.402 09:35:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:34.402 09:35:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:25:34.402 09:35:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:34.402 09:35:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:34.402 09:35:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:34.402 09:35:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:34.402 09:35:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:25:34.966 09:35:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:25:34.966 09:35:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:34.966 09:35:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:34.966 09:35:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:34.966 09:35:19 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1205 -- # sleep 2 00:25:36.863 09:35:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:36.863 09:35:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:36.863 09:35:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:25:36.863 09:35:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:36.863 09:35:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:36.863 09:35:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:36.863 09:35:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:25:36.863 [global] 00:25:36.863 thread=1 00:25:36.863 invalidate=1 00:25:36.863 rw=read 00:25:36.863 time_based=1 00:25:36.863 runtime=10 00:25:36.863 ioengine=libaio 00:25:36.863 direct=1 00:25:36.863 bs=262144 00:25:36.863 iodepth=64 00:25:36.863 norandommap=1 00:25:36.863 numjobs=1 00:25:36.863 00:25:36.863 [job0] 00:25:36.863 filename=/dev/nvme0n1 00:25:36.863 [job1] 00:25:36.863 filename=/dev/nvme10n1 00:25:36.863 [job2] 00:25:36.863 filename=/dev/nvme1n1 00:25:36.863 [job3] 00:25:36.863 filename=/dev/nvme2n1 00:25:36.863 [job4] 00:25:36.863 filename=/dev/nvme3n1 00:25:36.863 [job5] 00:25:36.863 filename=/dev/nvme4n1 00:25:36.863 [job6] 00:25:36.863 filename=/dev/nvme5n1 00:25:36.863 [job7] 00:25:36.863 filename=/dev/nvme6n1 00:25:36.863 [job8] 00:25:36.863 filename=/dev/nvme7n1 00:25:36.863 [job9] 00:25:36.863 filename=/dev/nvme8n1 00:25:36.863 [job10] 00:25:36.863 filename=/dev/nvme9n1 00:25:37.121 Could not set queue depth (nvme0n1) 00:25:37.121 Could not set queue depth (nvme10n1) 00:25:37.121 Could not set queue depth (nvme1n1) 00:25:37.121 Could not set queue depth (nvme2n1) 00:25:37.121 Could not set queue depth (nvme3n1) 00:25:37.121 Could not set queue depth (nvme4n1) 00:25:37.121 Could not set queue depth (nvme5n1) 00:25:37.121 Could not set queue depth (nvme6n1) 00:25:37.121 Could not set queue depth (nvme7n1) 00:25:37.121 Could not set queue depth (nvme8n1) 00:25:37.121 Could not set queue depth (nvme9n1) 00:25:37.121 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:37.121 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:37.121 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:37.121 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:37.121 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:37.121 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:37.121 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:37.121 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:37.121 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:37.121 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 
256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:37.121 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:37.121 fio-3.35 00:25:37.121 Starting 11 threads 00:25:49.334 00:25:49.334 job0: (groupid=0, jobs=1): err= 0: pid=800950: Sun Jul 14 09:35:32 2024 00:25:49.334 read: IOPS=644, BW=161MiB/s (169MB/s)(1614MiB/10021msec) 00:25:49.334 slat (usec): min=10, max=67072, avg=1127.00, stdev=3951.87 00:25:49.334 clat (msec): min=3, max=277, avg=98.17, stdev=45.85 00:25:49.334 lat (msec): min=3, max=292, avg=99.29, stdev=46.41 00:25:49.334 clat percentiles (msec): 00:25:49.334 | 1.00th=[ 12], 5.00th=[ 33], 10.00th=[ 42], 20.00th=[ 54], 00:25:49.334 | 30.00th=[ 71], 40.00th=[ 86], 50.00th=[ 95], 60.00th=[ 105], 00:25:49.334 | 70.00th=[ 118], 80.00th=[ 140], 90.00th=[ 165], 95.00th=[ 178], 00:25:49.334 | 99.00th=[ 199], 99.50th=[ 220], 99.90th=[ 266], 99.95th=[ 266], 00:25:49.334 | 99.99th=[ 279] 00:25:49.334 bw ( KiB/s): min=96768, max=351744, per=9.47%, avg=163640.40, stdev=60564.61, samples=20 00:25:49.334 iops : min= 378, max= 1374, avg=639.20, stdev=236.59, samples=20 00:25:49.334 lat (msec) : 4=0.02%, 10=0.73%, 20=1.36%, 50=13.62%, 100=40.06% 00:25:49.334 lat (msec) : 250=43.80%, 500=0.42% 00:25:49.334 cpu : usr=0.31%, sys=2.08%, ctx=1660, majf=0, minf=4097 00:25:49.334 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:25:49.334 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:49.334 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:49.334 issued rwts: total=6455,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:49.334 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:49.334 job1: (groupid=0, jobs=1): err= 0: pid=800951: Sun Jul 14 09:35:32 2024 00:25:49.334 read: IOPS=900, BW=225MiB/s (236MB/s)(2276MiB/10106msec) 00:25:49.334 slat (usec): min=9, max=143950, avg=819.96, stdev=4086.16 00:25:49.334 clat (usec): min=1118, max=310874, avg=70179.39, stdev=40769.58 00:25:49.334 lat (usec): min=1141, max=310912, avg=70999.35, stdev=41267.24 00:25:49.334 clat percentiles (msec): 00:25:49.334 | 1.00th=[ 8], 5.00th=[ 21], 10.00th=[ 39], 20.00th=[ 46], 00:25:49.334 | 30.00th=[ 48], 40.00th=[ 54], 50.00th=[ 60], 60.00th=[ 66], 00:25:49.334 | 70.00th=[ 75], 80.00th=[ 88], 90.00th=[ 124], 95.00th=[ 171], 00:25:49.334 | 99.00th=[ 207], 99.50th=[ 218], 99.90th=[ 236], 99.95th=[ 259], 00:25:49.334 | 99.99th=[ 313] 00:25:49.334 bw ( KiB/s): min=100864, max=347136, per=13.39%, avg=231373.65, stdev=66385.75, samples=20 00:25:49.334 iops : min= 394, max= 1356, avg=903.75, stdev=259.38, samples=20 00:25:49.334 lat (msec) : 2=0.02%, 4=0.20%, 10=1.78%, 20=2.94%, 50=29.89% 00:25:49.334 lat (msec) : 100=50.99%, 250=14.11%, 500=0.07% 00:25:49.334 cpu : usr=0.57%, sys=2.74%, ctx=2200, majf=0, minf=4097 00:25:49.334 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:25:49.334 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:49.334 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:49.334 issued rwts: total=9102,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:49.334 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:49.334 job2: (groupid=0, jobs=1): err= 0: pid=800953: Sun Jul 14 09:35:32 2024 00:25:49.334 read: IOPS=931, BW=233MiB/s (244MB/s)(2353MiB/10102msec) 00:25:49.334 slat (usec): min=9, max=146207, avg=700.50, stdev=3249.03 00:25:49.334 clat (usec): min=1663, max=376091, 
avg=67945.77, stdev=49776.86 00:25:49.334 lat (usec): min=1685, max=406702, avg=68646.27, stdev=50153.16 00:25:49.334 clat percentiles (msec): 00:25:49.334 | 1.00th=[ 6], 5.00th=[ 15], 10.00th=[ 23], 20.00th=[ 36], 00:25:49.334 | 30.00th=[ 41], 40.00th=[ 47], 50.00th=[ 52], 60.00th=[ 60], 00:25:49.334 | 70.00th=[ 74], 80.00th=[ 94], 90.00th=[ 140], 95.00th=[ 182], 00:25:49.334 | 99.00th=[ 228], 99.50th=[ 249], 99.90th=[ 376], 99.95th=[ 376], 00:25:49.334 | 99.99th=[ 376] 00:25:49.334 bw ( KiB/s): min=87040, max=388096, per=13.85%, avg=239283.50, stdev=85651.69, samples=20 00:25:49.334 iops : min= 340, max= 1516, avg=934.70, stdev=334.58, samples=20 00:25:49.335 lat (msec) : 2=0.04%, 4=0.44%, 10=2.21%, 20=6.16%, 50=38.21% 00:25:49.335 lat (msec) : 100=35.63%, 250=16.83%, 500=0.48% 00:25:49.335 cpu : usr=0.43%, sys=2.68%, ctx=2422, majf=0, minf=4097 00:25:49.335 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:25:49.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:49.335 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:49.335 issued rwts: total=9411,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:49.335 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:49.335 job3: (groupid=0, jobs=1): err= 0: pid=800954: Sun Jul 14 09:35:32 2024 00:25:49.335 read: IOPS=729, BW=182MiB/s (191MB/s)(1847MiB/10120msec) 00:25:49.335 slat (usec): min=9, max=231510, avg=861.27, stdev=5414.88 00:25:49.335 clat (msec): min=2, max=364, avg=86.77, stdev=62.22 00:25:49.335 lat (msec): min=2, max=536, avg=87.63, stdev=63.11 00:25:49.335 clat percentiles (msec): 00:25:49.335 | 1.00th=[ 7], 5.00th=[ 17], 10.00th=[ 26], 20.00th=[ 42], 00:25:49.335 | 30.00th=[ 50], 40.00th=[ 58], 50.00th=[ 69], 60.00th=[ 83], 00:25:49.335 | 70.00th=[ 97], 80.00th=[ 126], 90.00th=[ 184], 95.00th=[ 207], 00:25:49.335 | 99.00th=[ 309], 99.50th=[ 351], 99.90th=[ 355], 99.95th=[ 355], 00:25:49.335 | 99.99th=[ 363] 00:25:49.335 bw ( KiB/s): min=64000, max=365568, per=10.85%, avg=187417.85, stdev=78545.70, samples=20 00:25:49.335 iops : min= 250, max= 1428, avg=732.10, stdev=306.82, samples=20 00:25:49.335 lat (msec) : 4=0.16%, 10=1.83%, 20=4.81%, 50=23.44%, 100=41.63% 00:25:49.335 lat (msec) : 250=26.33%, 500=1.80% 00:25:49.335 cpu : usr=0.27%, sys=2.13%, ctx=2086, majf=0, minf=4097 00:25:49.335 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:25:49.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:49.335 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:49.335 issued rwts: total=7386,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:49.335 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:49.335 job4: (groupid=0, jobs=1): err= 0: pid=800955: Sun Jul 14 09:35:32 2024 00:25:49.335 read: IOPS=423, BW=106MiB/s (111MB/s)(1073MiB/10121msec) 00:25:49.335 slat (usec): min=9, max=130456, avg=1838.71, stdev=6696.34 00:25:49.335 clat (msec): min=2, max=372, avg=148.99, stdev=64.74 00:25:49.335 lat (msec): min=2, max=495, avg=150.83, stdev=65.85 00:25:49.335 clat percentiles (msec): 00:25:49.335 | 1.00th=[ 16], 5.00th=[ 30], 10.00th=[ 47], 20.00th=[ 93], 00:25:49.335 | 30.00th=[ 129], 40.00th=[ 146], 50.00th=[ 159], 60.00th=[ 169], 00:25:49.335 | 70.00th=[ 182], 80.00th=[ 197], 90.00th=[ 215], 95.00th=[ 247], 00:25:49.335 | 99.00th=[ 305], 99.50th=[ 334], 99.90th=[ 368], 99.95th=[ 368], 00:25:49.335 | 99.99th=[ 372] 00:25:49.335 bw ( KiB/s): min=69120, max=237568, 
per=6.27%, avg=108228.65, stdev=46062.07, samples=20 00:25:49.335 iops : min= 270, max= 928, avg=422.75, stdev=179.94, samples=20 00:25:49.335 lat (msec) : 4=0.02%, 10=0.35%, 20=1.70%, 50=8.81%, 100=10.53% 00:25:49.335 lat (msec) : 250=73.88%, 500=4.71% 00:25:49.335 cpu : usr=0.25%, sys=1.30%, ctx=1079, majf=0, minf=4097 00:25:49.335 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:25:49.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:49.335 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:49.335 issued rwts: total=4291,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:49.335 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:49.335 job5: (groupid=0, jobs=1): err= 0: pid=800957: Sun Jul 14 09:35:32 2024 00:25:49.335 read: IOPS=428, BW=107MiB/s (112MB/s)(1084MiB/10119msec) 00:25:49.335 slat (usec): min=11, max=184361, avg=2080.74, stdev=7172.78 00:25:49.335 clat (usec): min=1388, max=389624, avg=147181.39, stdev=67860.05 00:25:49.335 lat (usec): min=1417, max=423583, avg=149262.12, stdev=68931.78 00:25:49.335 clat percentiles (msec): 00:25:49.335 | 1.00th=[ 9], 5.00th=[ 19], 10.00th=[ 41], 20.00th=[ 93], 00:25:49.335 | 30.00th=[ 127], 40.00th=[ 140], 50.00th=[ 153], 60.00th=[ 165], 00:25:49.335 | 70.00th=[ 178], 80.00th=[ 194], 90.00th=[ 226], 95.00th=[ 257], 00:25:49.335 | 99.00th=[ 351], 99.50th=[ 363], 99.90th=[ 372], 99.95th=[ 376], 00:25:49.335 | 99.99th=[ 388] 00:25:49.335 bw ( KiB/s): min=46592, max=248320, per=6.33%, avg=109343.20, stdev=46442.69, samples=20 00:25:49.335 iops : min= 182, max= 970, avg=427.10, stdev=181.37, samples=20 00:25:49.335 lat (msec) : 2=0.02%, 4=0.28%, 10=1.50%, 20=3.62%, 50=5.35% 00:25:49.335 lat (msec) : 100=11.07%, 250=71.93%, 500=6.23% 00:25:49.335 cpu : usr=0.30%, sys=1.42%, ctx=1122, majf=0, minf=3722 00:25:49.335 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:25:49.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:49.335 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:49.335 issued rwts: total=4335,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:49.335 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:49.335 job6: (groupid=0, jobs=1): err= 0: pid=800958: Sun Jul 14 09:35:32 2024 00:25:49.335 read: IOPS=386, BW=96.7MiB/s (101MB/s)(979MiB/10121msec) 00:25:49.335 slat (usec): min=14, max=177970, avg=2401.42, stdev=8652.43 00:25:49.335 clat (usec): min=1370, max=486634, avg=162964.69, stdev=57367.77 00:25:49.335 lat (usec): min=1416, max=486718, avg=165366.11, stdev=58626.91 00:25:49.335 clat percentiles (msec): 00:25:49.335 | 1.00th=[ 4], 5.00th=[ 51], 10.00th=[ 110], 20.00th=[ 133], 00:25:49.335 | 30.00th=[ 144], 40.00th=[ 155], 50.00th=[ 165], 60.00th=[ 171], 00:25:49.335 | 70.00th=[ 182], 80.00th=[ 192], 90.00th=[ 218], 95.00th=[ 259], 00:25:49.335 | 99.00th=[ 355], 99.50th=[ 368], 99.90th=[ 401], 99.95th=[ 456], 00:25:49.335 | 99.99th=[ 489] 00:25:49.335 bw ( KiB/s): min=48128, max=172032, per=5.70%, avg=98550.35, stdev=26839.88, samples=20 00:25:49.335 iops : min= 188, max= 672, avg=384.95, stdev=104.84, samples=20 00:25:49.335 lat (msec) : 2=0.05%, 4=1.23%, 10=1.18%, 20=0.74%, 50=1.43% 00:25:49.335 lat (msec) : 100=2.96%, 250=85.49%, 500=6.92% 00:25:49.335 cpu : usr=0.29%, sys=1.33%, ctx=983, majf=0, minf=4097 00:25:49.335 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:25:49.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:25:49.335 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:49.335 issued rwts: total=3914,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:49.335 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:49.335 job7: (groupid=0, jobs=1): err= 0: pid=800959: Sun Jul 14 09:35:32 2024 00:25:49.335 read: IOPS=580, BW=145MiB/s (152MB/s)(1466MiB/10101msec) 00:25:49.335 slat (usec): min=10, max=147993, avg=1423.61, stdev=5947.09 00:25:49.335 clat (msec): min=2, max=338, avg=108.77, stdev=54.10 00:25:49.335 lat (msec): min=2, max=338, avg=110.19, stdev=54.92 00:25:49.335 clat percentiles (msec): 00:25:49.335 | 1.00th=[ 10], 5.00th=[ 26], 10.00th=[ 41], 20.00th=[ 51], 00:25:49.335 | 30.00th=[ 78], 40.00th=[ 96], 50.00th=[ 108], 60.00th=[ 124], 00:25:49.335 | 70.00th=[ 140], 80.00th=[ 157], 90.00th=[ 184], 95.00th=[ 199], 00:25:49.335 | 99.00th=[ 228], 99.50th=[ 243], 99.90th=[ 296], 99.95th=[ 300], 00:25:49.335 | 99.99th=[ 338] 00:25:49.335 bw ( KiB/s): min=64000, max=335360, per=8.59%, avg=148437.10, stdev=58362.38, samples=20 00:25:49.335 iops : min= 250, max= 1310, avg=579.80, stdev=227.96, samples=20 00:25:49.335 lat (msec) : 4=0.41%, 10=0.65%, 20=2.81%, 50=15.32%, 100=24.70% 00:25:49.335 lat (msec) : 250=55.94%, 500=0.17% 00:25:49.335 cpu : usr=0.34%, sys=1.98%, ctx=1469, majf=0, minf=4097 00:25:49.335 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:25:49.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:49.335 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:49.335 issued rwts: total=5862,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:49.335 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:49.335 job8: (groupid=0, jobs=1): err= 0: pid=800960: Sun Jul 14 09:35:32 2024 00:25:49.335 read: IOPS=744, BW=186MiB/s (195MB/s)(1883MiB/10120msec) 00:25:49.335 slat (usec): min=9, max=232448, avg=1091.65, stdev=4609.23 00:25:49.335 clat (msec): min=3, max=357, avg=84.84, stdev=56.01 00:25:49.335 lat (msec): min=3, max=570, avg=85.93, stdev=56.79 00:25:49.335 clat percentiles (msec): 00:25:49.335 | 1.00th=[ 7], 5.00th=[ 15], 10.00th=[ 37], 20.00th=[ 45], 00:25:49.335 | 30.00th=[ 53], 40.00th=[ 64], 50.00th=[ 74], 60.00th=[ 82], 00:25:49.335 | 70.00th=[ 90], 80.00th=[ 117], 90.00th=[ 161], 95.00th=[ 192], 00:25:49.335 | 99.00th=[ 309], 99.50th=[ 334], 99.90th=[ 351], 99.95th=[ 351], 00:25:49.335 | 99.99th=[ 359] 00:25:49.335 bw ( KiB/s): min=74752, max=356352, per=11.07%, avg=191164.75, stdev=73483.91, samples=20 00:25:49.335 iops : min= 292, max= 1392, avg=746.70, stdev=287.06, samples=20 00:25:49.335 lat (msec) : 4=0.01%, 10=2.47%, 20=4.12%, 50=21.15%, 100=47.42% 00:25:49.335 lat (msec) : 250=22.96%, 500=1.87% 00:25:49.335 cpu : usr=0.44%, sys=2.28%, ctx=1770, majf=0, minf=4097 00:25:49.335 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:49.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:49.335 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:49.335 issued rwts: total=7531,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:49.335 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:49.335 job9: (groupid=0, jobs=1): err= 0: pid=800961: Sun Jul 14 09:35:32 2024 00:25:49.335 read: IOPS=404, BW=101MiB/s (106MB/s)(1021MiB/10102msec) 00:25:49.335 slat (usec): min=10, max=174014, avg=2163.11, stdev=7142.94 00:25:49.335 clat (msec): min=12, max=452, 
avg=156.00, stdev=62.17 00:25:49.335 lat (msec): min=12, max=452, avg=158.17, stdev=62.98 00:25:49.335 clat percentiles (msec): 00:25:49.335 | 1.00th=[ 29], 5.00th=[ 58], 10.00th=[ 94], 20.00th=[ 115], 00:25:49.335 | 30.00th=[ 129], 40.00th=[ 142], 50.00th=[ 153], 60.00th=[ 163], 00:25:49.335 | 70.00th=[ 171], 80.00th=[ 186], 90.00th=[ 224], 95.00th=[ 279], 00:25:49.335 | 99.00th=[ 368], 99.50th=[ 376], 99.90th=[ 393], 99.95th=[ 447], 00:25:49.335 | 99.99th=[ 451] 00:25:49.335 bw ( KiB/s): min=54784, max=180224, per=5.96%, avg=102934.85, stdev=31196.52, samples=20 00:25:49.335 iops : min= 214, max= 704, avg=402.05, stdev=121.90, samples=20 00:25:49.335 lat (msec) : 20=0.56%, 50=3.21%, 100=9.04%, 250=80.85%, 500=6.34% 00:25:49.335 cpu : usr=0.26%, sys=1.37%, ctx=1041, majf=0, minf=4097 00:25:49.335 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:25:49.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:49.335 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:49.335 issued rwts: total=4084,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:49.335 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:49.335 job10: (groupid=0, jobs=1): err= 0: pid=800962: Sun Jul 14 09:35:32 2024 00:25:49.335 read: IOPS=591, BW=148MiB/s (155MB/s)(1481MiB/10018msec) 00:25:49.335 slat (usec): min=12, max=161485, avg=1483.26, stdev=6234.07 00:25:49.335 clat (msec): min=3, max=428, avg=106.67, stdev=69.77 00:25:49.336 lat (msec): min=3, max=428, avg=108.15, stdev=70.91 00:25:49.336 clat percentiles (msec): 00:25:49.336 | 1.00th=[ 10], 5.00th=[ 31], 10.00th=[ 40], 20.00th=[ 46], 00:25:49.336 | 30.00th=[ 55], 40.00th=[ 65], 50.00th=[ 81], 60.00th=[ 108], 00:25:49.336 | 70.00th=[ 150], 80.00th=[ 171], 90.00th=[ 201], 95.00th=[ 224], 00:25:49.336 | 99.00th=[ 326], 99.50th=[ 359], 99.90th=[ 380], 99.95th=[ 430], 00:25:49.336 | 99.99th=[ 430] 00:25:49.336 bw ( KiB/s): min=40960, max=322560, per=8.69%, avg=150031.70, stdev=81493.50, samples=20 00:25:49.336 iops : min= 160, max= 1260, avg=586.05, stdev=318.34, samples=20 00:25:49.336 lat (msec) : 4=0.02%, 10=1.11%, 20=1.89%, 50=21.34%, 100=32.83% 00:25:49.336 lat (msec) : 250=39.33%, 500=3.48% 00:25:49.336 cpu : usr=0.30%, sys=2.19%, ctx=1474, majf=0, minf=4097 00:25:49.336 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:25:49.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:49.336 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:49.336 issued rwts: total=5924,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:49.336 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:49.336 00:25:49.336 Run status group 0 (all jobs): 00:25:49.336 READ: bw=1687MiB/s (1769MB/s), 96.7MiB/s-233MiB/s (101MB/s-244MB/s), io=16.7GiB (17.9GB), run=10018-10121msec 00:25:49.336 00:25:49.336 Disk stats (read/write): 00:25:49.336 nvme0n1: ios=12652/0, merge=0/0, ticks=1237243/0, in_queue=1237243, util=97.21% 00:25:49.336 nvme10n1: ios=17959/0, merge=0/0, ticks=1238923/0, in_queue=1238923, util=97.43% 00:25:49.336 nvme1n1: ios=18653/0, merge=0/0, ticks=1236944/0, in_queue=1236944, util=97.72% 00:25:49.336 nvme2n1: ios=14594/0, merge=0/0, ticks=1234428/0, in_queue=1234428, util=97.84% 00:25:49.336 nvme3n1: ios=8398/0, merge=0/0, ticks=1225524/0, in_queue=1225524, util=97.95% 00:25:49.336 nvme4n1: ios=8526/0, merge=0/0, ticks=1222285/0, in_queue=1222285, util=98.28% 00:25:49.336 nvme5n1: ios=7670/0, merge=0/0, 
ticks=1222284/0, in_queue=1222284, util=98.43% 00:25:49.336 nvme6n1: ios=11535/0, merge=0/0, ticks=1230197/0, in_queue=1230197, util=98.53% 00:25:49.336 nvme7n1: ios=14891/0, merge=0/0, ticks=1230825/0, in_queue=1230825, util=98.92% 00:25:49.336 nvme8n1: ios=7955/0, merge=0/0, ticks=1226596/0, in_queue=1226596, util=99.08% 00:25:49.336 nvme9n1: ios=11536/0, merge=0/0, ticks=1232576/0, in_queue=1232576, util=99.23% 00:25:49.336 09:35:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:25:49.336 [global] 00:25:49.336 thread=1 00:25:49.336 invalidate=1 00:25:49.336 rw=randwrite 00:25:49.336 time_based=1 00:25:49.336 runtime=10 00:25:49.336 ioengine=libaio 00:25:49.336 direct=1 00:25:49.336 bs=262144 00:25:49.336 iodepth=64 00:25:49.336 norandommap=1 00:25:49.336 numjobs=1 00:25:49.336 00:25:49.336 [job0] 00:25:49.336 filename=/dev/nvme0n1 00:25:49.336 [job1] 00:25:49.336 filename=/dev/nvme10n1 00:25:49.336 [job2] 00:25:49.336 filename=/dev/nvme1n1 00:25:49.336 [job3] 00:25:49.336 filename=/dev/nvme2n1 00:25:49.336 [job4] 00:25:49.336 filename=/dev/nvme3n1 00:25:49.336 [job5] 00:25:49.336 filename=/dev/nvme4n1 00:25:49.336 [job6] 00:25:49.336 filename=/dev/nvme5n1 00:25:49.336 [job7] 00:25:49.336 filename=/dev/nvme6n1 00:25:49.336 [job8] 00:25:49.336 filename=/dev/nvme7n1 00:25:49.336 [job9] 00:25:49.336 filename=/dev/nvme8n1 00:25:49.336 [job10] 00:25:49.336 filename=/dev/nvme9n1 00:25:49.336 Could not set queue depth (nvme0n1) 00:25:49.336 Could not set queue depth (nvme10n1) 00:25:49.336 Could not set queue depth (nvme1n1) 00:25:49.336 Could not set queue depth (nvme2n1) 00:25:49.336 Could not set queue depth (nvme3n1) 00:25:49.336 Could not set queue depth (nvme4n1) 00:25:49.336 Could not set queue depth (nvme5n1) 00:25:49.336 Could not set queue depth (nvme6n1) 00:25:49.336 Could not set queue depth (nvme7n1) 00:25:49.336 Could not set queue depth (nvme8n1) 00:25:49.336 Could not set queue depth (nvme9n1) 00:25:49.336 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:49.336 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:49.336 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:49.336 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:49.336 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:49.336 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:49.336 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:49.336 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:49.336 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:49.336 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:49.336 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:49.336 fio-3.35 00:25:49.336 Starting 11 threads 00:25:59.304 
00:25:59.304 job0: (groupid=0, jobs=1): err= 0: pid=802074: Sun Jul 14 09:35:42 2024 00:25:59.304 write: IOPS=289, BW=72.4MiB/s (76.0MB/s)(741MiB/10221msec); 0 zone resets 00:25:59.304 slat (usec): min=24, max=1082.7k, avg=2727.56, stdev=21190.76 00:25:59.304 clat (msec): min=4, max=1588, avg=217.89, stdev=209.19 00:25:59.304 lat (msec): min=4, max=1588, avg=220.62, stdev=210.46 00:25:59.304 clat percentiles (msec): 00:25:59.304 | 1.00th=[ 11], 5.00th=[ 20], 10.00th=[ 33], 20.00th=[ 69], 00:25:59.304 | 30.00th=[ 148], 40.00th=[ 178], 50.00th=[ 203], 60.00th=[ 228], 00:25:59.304 | 70.00th=[ 259], 80.00th=[ 288], 90.00th=[ 330], 95.00th=[ 439], 00:25:59.304 | 99.00th=[ 1385], 99.50th=[ 1552], 99.90th=[ 1586], 99.95th=[ 1586], 00:25:59.304 | 99.99th=[ 1586] 00:25:59.304 bw ( KiB/s): min=44544, max=167424, per=7.86%, avg=82411.39, stdev=31353.61, samples=18 00:25:59.304 iops : min= 174, max= 654, avg=321.89, stdev=122.47, samples=18 00:25:59.304 lat (msec) : 10=0.98%, 20=4.32%, 50=9.45%, 100=11.34%, 250=40.82% 00:25:59.304 lat (msec) : 500=30.96%, 2000=2.13% 00:25:59.304 cpu : usr=0.81%, sys=1.07%, ctx=1684, majf=0, minf=1 00:25:59.304 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:25:59.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.304 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:59.304 issued rwts: total=0,2962,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.304 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:59.304 job1: (groupid=0, jobs=1): err= 0: pid=802101: Sun Jul 14 09:35:42 2024 00:25:59.304 write: IOPS=360, BW=90.1MiB/s (94.5MB/s)(920MiB/10205msec); 0 zone resets 00:25:59.304 slat (usec): min=15, max=139834, avg=1859.85, stdev=6753.09 00:25:59.304 clat (msec): min=2, max=769, avg=175.56, stdev=137.25 00:25:59.304 lat (msec): min=3, max=769, avg=177.42, stdev=138.58 00:25:59.304 clat percentiles (msec): 00:25:59.304 | 1.00th=[ 14], 5.00th=[ 36], 10.00th=[ 50], 20.00th=[ 77], 00:25:59.304 | 30.00th=[ 84], 40.00th=[ 100], 50.00th=[ 121], 60.00th=[ 157], 00:25:59.304 | 70.00th=[ 230], 80.00th=[ 300], 90.00th=[ 376], 95.00th=[ 426], 00:25:59.304 | 99.00th=[ 693], 99.50th=[ 751], 99.90th=[ 768], 99.95th=[ 768], 00:25:59.304 | 99.99th=[ 768] 00:25:59.304 bw ( KiB/s): min=27136, max=198144, per=8.83%, avg=92557.35, stdev=52279.07, samples=20 00:25:59.304 iops : min= 106, max= 774, avg=361.50, stdev=204.15, samples=20 00:25:59.304 lat (msec) : 4=0.03%, 10=0.52%, 20=1.20%, 50=8.53%, 100=30.36% 00:25:59.304 lat (msec) : 250=34.93%, 500=22.21%, 750=1.58%, 1000=0.65% 00:25:59.304 cpu : usr=0.94%, sys=1.28%, ctx=2142, majf=0, minf=1 00:25:59.304 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:25:59.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.304 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:59.304 issued rwts: total=0,3679,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.304 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:59.304 job2: (groupid=0, jobs=1): err= 0: pid=802129: Sun Jul 14 09:35:42 2024 00:25:59.304 write: IOPS=270, BW=67.5MiB/s (70.8MB/s)(689MiB/10207msec); 0 zone resets 00:25:59.304 slat (usec): min=21, max=84291, avg=3625.20, stdev=7143.80 00:25:59.304 clat (msec): min=26, max=435, avg=233.25, stdev=77.42 00:25:59.304 lat (msec): min=26, max=435, avg=236.87, stdev=78.29 00:25:59.304 clat percentiles (msec): 00:25:59.304 | 1.00th=[ 77], 5.00th=[ 112], 
10.00th=[ 130], 20.00th=[ 171], 00:25:59.304 | 30.00th=[ 190], 40.00th=[ 215], 50.00th=[ 228], 60.00th=[ 241], 00:25:59.304 | 70.00th=[ 264], 80.00th=[ 305], 90.00th=[ 351], 95.00th=[ 376], 00:25:59.304 | 99.00th=[ 409], 99.50th=[ 418], 99.90th=[ 422], 99.95th=[ 435], 00:25:59.304 | 99.99th=[ 435] 00:25:59.304 bw ( KiB/s): min=40960, max=130810, per=6.57%, avg=68901.05, stdev=21572.02, samples=20 00:25:59.304 iops : min= 160, max= 510, avg=269.05, stdev=84.11, samples=20 00:25:59.304 lat (msec) : 50=0.44%, 100=1.74%, 250=63.79%, 500=34.03% 00:25:59.304 cpu : usr=0.81%, sys=0.81%, ctx=738, majf=0, minf=1 00:25:59.304 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:25:59.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.304 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:59.304 issued rwts: total=0,2756,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.304 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:59.304 job3: (groupid=0, jobs=1): err= 0: pid=802146: Sun Jul 14 09:35:42 2024 00:25:59.304 write: IOPS=301, BW=75.5MiB/s (79.1MB/s)(767MiB/10161msec); 0 zone resets 00:25:59.304 slat (usec): min=25, max=106826, avg=2503.29, stdev=7008.87 00:25:59.304 clat (msec): min=4, max=962, avg=209.40, stdev=123.88 00:25:59.304 lat (msec): min=5, max=977, avg=211.90, stdev=125.16 00:25:59.304 clat percentiles (msec): 00:25:59.304 | 1.00th=[ 15], 5.00th=[ 33], 10.00th=[ 58], 20.00th=[ 124], 00:25:59.304 | 30.00th=[ 148], 40.00th=[ 163], 50.00th=[ 186], 60.00th=[ 218], 00:25:59.304 | 70.00th=[ 257], 80.00th=[ 309], 90.00th=[ 363], 95.00th=[ 401], 00:25:59.304 | 99.00th=[ 498], 99.50th=[ 927], 99.90th=[ 961], 99.95th=[ 961], 00:25:59.304 | 99.99th=[ 961] 00:25:59.304 bw ( KiB/s): min=40960, max=169984, per=7.34%, avg=76883.10, stdev=32843.63, samples=20 00:25:59.304 iops : min= 160, max= 664, avg=300.30, stdev=128.29, samples=20 00:25:59.304 lat (msec) : 10=0.20%, 20=1.79%, 50=7.34%, 100=6.23%, 250=52.82% 00:25:59.304 lat (msec) : 500=30.65%, 750=0.29%, 1000=0.68% 00:25:59.304 cpu : usr=0.83%, sys=1.08%, ctx=1619, majf=0, minf=1 00:25:59.304 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=97.9% 00:25:59.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.304 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:59.304 issued rwts: total=0,3067,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.304 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:59.304 job4: (groupid=0, jobs=1): err= 0: pid=802147: Sun Jul 14 09:35:42 2024 00:25:59.304 write: IOPS=522, BW=131MiB/s (137MB/s)(1315MiB/10060msec); 0 zone resets 00:25:59.304 slat (usec): min=18, max=125745, avg=1597.67, stdev=4473.85 00:25:59.304 clat (usec): min=1667, max=484883, avg=120721.17, stdev=90118.87 00:25:59.304 lat (usec): min=1713, max=484969, avg=122318.84, stdev=91237.42 00:25:59.304 clat percentiles (msec): 00:25:59.304 | 1.00th=[ 7], 5.00th=[ 21], 10.00th=[ 51], 20.00th=[ 62], 00:25:59.304 | 30.00th=[ 65], 40.00th=[ 79], 50.00th=[ 97], 60.00th=[ 113], 00:25:59.304 | 70.00th=[ 130], 80.00th=[ 157], 90.00th=[ 292], 95.00th=[ 342], 00:25:59.304 | 99.00th=[ 418], 99.50th=[ 430], 99.90th=[ 447], 99.95th=[ 456], 00:25:59.304 | 99.99th=[ 485] 00:25:59.304 bw ( KiB/s): min=47104, max=255488, per=12.69%, avg=133041.40, stdev=64771.22, samples=20 00:25:59.304 iops : min= 184, max= 998, avg=519.60, stdev=253.01, samples=20 00:25:59.304 lat (msec) : 2=0.06%, 4=0.32%, 
10=1.60%, 20=2.93%, 50=4.94% 00:25:59.304 lat (msec) : 100=42.82%, 250=36.78%, 500=10.55% 00:25:59.304 cpu : usr=1.68%, sys=1.58%, ctx=2194, majf=0, minf=1 00:25:59.304 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:59.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.304 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:59.304 issued rwts: total=0,5261,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.304 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:59.304 job5: (groupid=0, jobs=1): err= 0: pid=802148: Sun Jul 14 09:35:42 2024 00:25:59.304 write: IOPS=248, BW=62.2MiB/s (65.3MB/s)(635MiB/10202msec); 0 zone resets 00:25:59.304 slat (usec): min=20, max=125308, avg=3136.92, stdev=9680.11 00:25:59.304 clat (msec): min=3, max=566, avg=253.38, stdev=136.03 00:25:59.304 lat (msec): min=3, max=566, avg=256.52, stdev=137.70 00:25:59.304 clat percentiles (msec): 00:25:59.304 | 1.00th=[ 9], 5.00th=[ 23], 10.00th=[ 43], 20.00th=[ 91], 00:25:59.304 | 30.00th=[ 190], 40.00th=[ 234], 50.00th=[ 259], 60.00th=[ 309], 00:25:59.304 | 70.00th=[ 355], 80.00th=[ 380], 90.00th=[ 418], 95.00th=[ 439], 00:25:59.304 | 99.00th=[ 506], 99.50th=[ 542], 99.90th=[ 558], 99.95th=[ 567], 00:25:59.304 | 99.99th=[ 567] 00:25:59.304 bw ( KiB/s): min=34816, max=136704, per=6.05%, avg=63401.05, stdev=24931.20, samples=20 00:25:59.304 iops : min= 136, max= 534, avg=247.60, stdev=97.43, samples=20 00:25:59.304 lat (msec) : 4=0.16%, 10=1.38%, 20=2.83%, 50=6.81%, 100=9.80% 00:25:59.304 lat (msec) : 250=27.48%, 500=50.47%, 750=1.06% 00:25:59.304 cpu : usr=0.64%, sys=0.89%, ctx=1341, majf=0, minf=1 00:25:59.304 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:25:59.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.304 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:59.304 issued rwts: total=0,2540,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.304 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:59.304 job6: (groupid=0, jobs=1): err= 0: pid=802149: Sun Jul 14 09:35:42 2024 00:25:59.304 write: IOPS=540, BW=135MiB/s (142MB/s)(1377MiB/10197msec); 0 zone resets 00:25:59.304 slat (usec): min=17, max=1011.5k, avg=1383.13, stdev=14154.56 00:25:59.304 clat (usec): min=1899, max=1367.7k, avg=116980.30, stdev=137693.18 00:25:59.304 lat (usec): min=1939, max=1367.7k, avg=118363.42, stdev=138399.27 00:25:59.304 clat percentiles (msec): 00:25:59.304 | 1.00th=[ 11], 5.00th=[ 39], 10.00th=[ 58], 20.00th=[ 63], 00:25:59.304 | 30.00th=[ 70], 40.00th=[ 78], 50.00th=[ 83], 60.00th=[ 96], 00:25:59.304 | 70.00th=[ 110], 80.00th=[ 136], 90.00th=[ 209], 95.00th=[ 245], 00:25:59.304 | 99.00th=[ 1150], 99.50th=[ 1267], 99.90th=[ 1351], 99.95th=[ 1368], 00:25:59.304 | 99.99th=[ 1368] 00:25:59.304 bw ( KiB/s): min= 7680, max=272896, per=14.00%, avg=146710.32, stdev=67877.97, samples=19 00:25:59.304 iops : min= 30, max= 1066, avg=573.05, stdev=265.19, samples=19 00:25:59.304 lat (msec) : 2=0.02%, 4=0.15%, 10=0.83%, 20=1.47%, 50=3.85% 00:25:59.304 lat (msec) : 100=56.38%, 250=32.84%, 500=3.32%, 2000=1.14% 00:25:59.304 cpu : usr=1.47%, sys=1.85%, ctx=2379, majf=0, minf=1 00:25:59.304 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:59.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.304 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:59.304 issued rwts: 
total=0,5509,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.304 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:59.304 job7: (groupid=0, jobs=1): err= 0: pid=802150: Sun Jul 14 09:35:42 2024 00:25:59.304 write: IOPS=462, BW=116MiB/s (121MB/s)(1180MiB/10204msec); 0 zone resets 00:25:59.304 slat (usec): min=18, max=171972, avg=1469.94, stdev=5533.75 00:25:59.304 clat (msec): min=2, max=544, avg=136.84, stdev=105.82 00:25:59.304 lat (msec): min=2, max=570, avg=138.31, stdev=107.09 00:25:59.304 clat percentiles (msec): 00:25:59.304 | 1.00th=[ 5], 5.00th=[ 17], 10.00th=[ 27], 20.00th=[ 45], 00:25:59.304 | 30.00th=[ 75], 40.00th=[ 91], 50.00th=[ 107], 60.00th=[ 113], 00:25:59.305 | 70.00th=[ 159], 80.00th=[ 243], 90.00th=[ 292], 95.00th=[ 342], 00:25:59.305 | 99.00th=[ 430], 99.50th=[ 481], 99.90th=[ 510], 99.95th=[ 518], 00:25:59.305 | 99.99th=[ 542] 00:25:59.305 bw ( KiB/s): min=54784, max=243225, per=11.37%, avg=119172.45, stdev=50218.50, samples=20 00:25:59.305 iops : min= 214, max= 950, avg=465.50, stdev=196.15, samples=20 00:25:59.305 lat (msec) : 4=0.59%, 10=2.23%, 20=4.05%, 50=15.11%, 100=22.34% 00:25:59.305 lat (msec) : 250=37.55%, 500=17.86%, 750=0.28% 00:25:59.305 cpu : usr=1.41%, sys=1.47%, ctx=2891, majf=0, minf=1 00:25:59.305 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:59.305 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.305 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:59.305 issued rwts: total=0,4719,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.305 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:59.305 job8: (groupid=0, jobs=1): err= 0: pid=802151: Sun Jul 14 09:35:42 2024 00:25:59.305 write: IOPS=425, BW=106MiB/s (112MB/s)(1085MiB/10190msec); 0 zone resets 00:25:59.305 slat (usec): min=19, max=1124.0k, avg=2034.51, stdev=18095.15 00:25:59.305 clat (msec): min=2, max=1631, avg=148.22, stdev=188.34 00:25:59.305 lat (msec): min=2, max=1631, avg=150.26, stdev=189.76 00:25:59.305 clat percentiles (msec): 00:25:59.305 | 1.00th=[ 9], 5.00th=[ 25], 10.00th=[ 47], 20.00th=[ 82], 00:25:59.305 | 30.00th=[ 88], 40.00th=[ 97], 50.00th=[ 110], 60.00th=[ 126], 00:25:59.305 | 70.00th=[ 148], 80.00th=[ 174], 90.00th=[ 247], 95.00th=[ 300], 00:25:59.305 | 99.00th=[ 1569], 99.50th=[ 1603], 99.90th=[ 1620], 99.95th=[ 1636], 00:25:59.305 | 99.99th=[ 1636] 00:25:59.305 bw ( KiB/s): min=55296, max=192512, per=11.60%, avg=121536.50, stdev=42909.20, samples=18 00:25:59.305 iops : min= 216, max= 752, avg=474.67, stdev=167.54, samples=18 00:25:59.305 lat (msec) : 4=0.16%, 10=1.08%, 20=2.77%, 50=6.43%, 100=31.74% 00:25:59.305 lat (msec) : 250=48.43%, 500=7.93%, 2000=1.45% 00:25:59.305 cpu : usr=1.22%, sys=1.34%, ctx=1868, majf=0, minf=1 00:25:59.305 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:25:59.305 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.305 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:59.305 issued rwts: total=0,4338,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.305 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:59.305 job9: (groupid=0, jobs=1): err= 0: pid=802152: Sun Jul 14 09:35:42 2024 00:25:59.305 write: IOPS=369, BW=92.3MiB/s (96.7MB/s)(942MiB/10210msec); 0 zone resets 00:25:59.305 slat (usec): min=22, max=1235.9k, avg=1638.09, stdev=22273.34 00:25:59.305 clat (msec): min=2, max=1596, avg=171.62, stdev=220.41 00:25:59.305 lat (msec): min=2, 
max=1596, avg=173.26, stdev=222.06 00:25:59.305 clat percentiles (msec): 00:25:59.305 | 1.00th=[ 10], 5.00th=[ 33], 10.00th=[ 54], 20.00th=[ 72], 00:25:59.305 | 30.00th=[ 89], 40.00th=[ 110], 50.00th=[ 128], 60.00th=[ 142], 00:25:59.305 | 70.00th=[ 159], 80.00th=[ 190], 90.00th=[ 264], 95.00th=[ 376], 00:25:59.305 | 99.00th=[ 1485], 99.50th=[ 1552], 99.90th=[ 1586], 99.95th=[ 1603], 00:25:59.305 | 99.99th=[ 1603] 00:25:59.305 bw ( KiB/s): min=20480, max=169984, per=10.05%, avg=105344.89, stdev=42560.22, samples=18 00:25:59.305 iops : min= 80, max= 664, avg=411.44, stdev=166.27, samples=18 00:25:59.305 lat (msec) : 4=0.13%, 10=0.90%, 20=1.94%, 50=5.68%, 100=26.99% 00:25:59.305 lat (msec) : 250=52.26%, 500=7.67%, 750=1.59%, 1000=0.56%, 2000=2.28% 00:25:59.305 cpu : usr=1.06%, sys=1.20%, ctx=2418, majf=0, minf=1 00:25:59.305 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:25:59.305 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.305 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:59.305 issued rwts: total=0,3768,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.305 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:59.305 job10: (groupid=0, jobs=1): err= 0: pid=802153: Sun Jul 14 09:35:42 2024 00:25:59.305 write: IOPS=318, BW=79.5MiB/s (83.4MB/s)(812MiB/10207msec); 0 zone resets 00:25:59.305 slat (usec): min=15, max=159691, avg=2659.39, stdev=7655.45 00:25:59.305 clat (msec): min=8, max=525, avg=198.46, stdev=114.61 00:25:59.305 lat (msec): min=12, max=525, avg=201.12, stdev=116.25 00:25:59.305 clat percentiles (msec): 00:25:59.305 | 1.00th=[ 23], 5.00th=[ 36], 10.00th=[ 65], 20.00th=[ 97], 00:25:59.305 | 30.00th=[ 111], 40.00th=[ 153], 50.00th=[ 190], 60.00th=[ 226], 00:25:59.305 | 70.00th=[ 243], 80.00th=[ 292], 90.00th=[ 376], 95.00th=[ 418], 00:25:59.305 | 99.00th=[ 514], 99.50th=[ 518], 99.90th=[ 523], 99.95th=[ 527], 00:25:59.305 | 99.99th=[ 527] 00:25:59.305 bw ( KiB/s): min=30720, max=180224, per=7.77%, avg=81471.15, stdev=39028.99, samples=20 00:25:59.305 iops : min= 120, max= 704, avg=318.15, stdev=152.38, samples=20 00:25:59.305 lat (msec) : 10=0.03%, 20=0.31%, 50=7.33%, 100=13.80%, 250=50.12% 00:25:59.305 lat (msec) : 500=27.11%, 750=1.29% 00:25:59.305 cpu : usr=0.89%, sys=1.15%, ctx=1508, majf=0, minf=1 00:25:59.305 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:25:59.305 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.305 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:59.305 issued rwts: total=0,3246,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.305 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:59.305 00:25:59.305 Run status group 0 (all jobs): 00:25:59.305 WRITE: bw=1024MiB/s (1073MB/s), 62.2MiB/s-135MiB/s (65.3MB/s-142MB/s), io=10.2GiB (11.0GB), run=10060-10221msec 00:25:59.305 00:25:59.305 Disk stats (read/write): 00:25:59.305 nvme0n1: ios=46/5849, merge=0/0, ticks=1587/1158288, in_queue=1159875, util=99.44% 00:25:59.305 nvme10n1: ios=49/7323, merge=0/0, ticks=198/1244546, in_queue=1244744, util=98.78% 00:25:59.305 nvme1n1: ios=49/5473, merge=0/0, ticks=43/1228676, in_queue=1228719, util=97.76% 00:25:59.305 nvme2n1: ios=40/5961, merge=0/0, ticks=59/1187562, in_queue=1187621, util=97.92% 00:25:59.305 nvme3n1: ios=20/10227, merge=0/0, ticks=43/1215908, in_queue=1215951, util=97.79% 00:25:59.305 nvme4n1: ios=47/5047, merge=0/0, ticks=780/1234778, in_queue=1235558, 
util=99.97% 00:25:59.305 nvme5n1: ios=45/10986, merge=0/0, ticks=1835/1214868, in_queue=1216703, util=100.00% 00:25:59.305 nvme6n1: ios=0/9404, merge=0/0, ticks=0/1243872, in_queue=1243872, util=98.41% 00:25:59.305 nvme7n1: ios=44/8650, merge=0/0, ticks=118/1208993, in_queue=1209111, util=99.14% 00:25:59.305 nvme8n1: ios=41/7477, merge=0/0, ticks=1500/1179689, in_queue=1181189, util=100.00% 00:25:59.305 nvme9n1: ios=0/6454, merge=0/0, ticks=0/1235507, in_queue=1235507, util=99.11% 00:25:59.305 09:35:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:25:59.305 09:35:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:25:59.305 09:35:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:59.305 09:35:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:59.305 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:59.305 09:35:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:25:59.305 09:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:59.305 09:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:59.305 09:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:25:59.305 09:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:59.305 09:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:25:59.305 09:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:59.305 09:35:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:59.305 09:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.305 09:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:59.305 09:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.305 09:35:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:59.305 09:35:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:25:59.305 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:25:59.305 09:35:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:25:59.305 09:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:59.305 09:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:59.305 09:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:25:59.305 09:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:59.305 09:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:25:59.305 09:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:59.305 09:35:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:59.305 09:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:25:59.305 09:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:59.305 09:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.305 09:35:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:59.305 09:35:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:25:59.564 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:25:59.564 09:35:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:25:59.564 09:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:59.822 09:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:59.822 09:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:25:59.822 09:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:59.822 09:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:25:59.822 09:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:59.822 09:35:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:25:59.822 09:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.822 09:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:59.822 09:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.822 09:35:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:59.822 09:35:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:26:00.080 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:26:00.080 09:35:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:26:00.080 09:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:00.080 09:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:00.080 09:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:26:00.080 09:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:00.080 09:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:26:00.080 09:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:00.080 09:35:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:26:00.080 09:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.080 09:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:00.080 09:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.080 09:35:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:00.080 09:35:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 
00:26:00.338 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:26:00.338 09:35:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:26:00.338 09:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:00.338 09:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:00.338 09:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:26:00.338 09:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:00.338 09:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:26:00.338 09:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:00.338 09:35:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:26:00.338 09:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.338 09:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:00.338 09:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.338 09:35:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:00.338 09:35:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:26:00.597 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:26:00.597 09:35:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:26:00.597 09:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:00.597 09:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:00.597 09:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:26:00.597 09:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:00.597 09:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:26:00.597 09:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:00.597 09:35:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:26:00.597 09:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.597 09:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:00.597 09:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.597 09:35:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:00.597 09:35:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:26:00.597 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:26:00.597 09:35:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:26:00.597 09:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:00.597 09:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:00.597 09:35:44 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:26:00.597 09:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:00.597 09:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:26:00.597 09:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:00.597 09:35:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:26:00.597 09:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.597 09:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:00.597 09:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.597 09:35:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:00.597 09:35:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:26:00.856 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:26:00.856 09:35:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:26:00.856 09:35:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:00.856 09:35:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:00.856 09:35:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:26:00.856 09:35:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:00.856 09:35:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:26:00.856 09:35:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:00.856 09:35:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:26:00.856 09:35:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.856 09:35:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:00.856 09:35:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.856 09:35:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:00.856 09:35:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:26:00.856 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:26:00.856 09:35:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:26:00.856 09:35:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:00.856 09:35:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:00.856 09:35:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:26:00.856 09:35:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:00.856 09:35:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:26:00.856 09:35:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:00.856 09:35:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # 
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:26:00.856 09:35:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.856 09:35:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:00.856 09:35:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.856 09:35:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:00.856 09:35:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:26:01.115 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:26:01.115 09:35:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:26:01.115 09:35:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:01.115 09:35:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:01.115 09:35:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:26:01.115 09:35:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:01.115 09:35:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:26:01.115 09:35:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:01.115 09:35:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:26:01.115 09:35:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.115 09:35:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:01.115 09:35:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.115 09:35:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:01.115 09:35:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:26:01.115 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:26:01.115 09:35:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:26:01.115 09:35:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:01.115 09:35:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:01.115 09:35:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:26:01.115 09:35:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:01.116 09:35:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:26:01.116 09:35:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:01.116 09:35:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:26:01.116 09:35:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.116 09:35:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:01.116 09:35:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.116 09:35:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f 
./local-job0-0-verify.state 00:26:01.116 09:35:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:01.116 09:35:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:26:01.116 09:35:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:01.116 09:35:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:26:01.116 09:35:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:01.116 09:35:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:26:01.116 09:35:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:01.116 09:35:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:01.116 rmmod nvme_tcp 00:26:01.116 rmmod nvme_fabrics 00:26:01.116 rmmod nvme_keyring 00:26:01.116 09:35:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:01.375 09:35:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:26:01.375 09:35:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:26:01.375 09:35:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 796701 ']' 00:26:01.375 09:35:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 796701 00:26:01.375 09:35:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@948 -- # '[' -z 796701 ']' 00:26:01.375 09:35:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # kill -0 796701 00:26:01.375 09:35:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # uname 00:26:01.375 09:35:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:01.375 09:35:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 796701 00:26:01.375 09:35:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:01.375 09:35:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:01.375 09:35:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 796701' 00:26:01.375 killing process with pid 796701 00:26:01.375 09:35:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@967 -- # kill 796701 00:26:01.375 09:35:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@972 -- # wait 796701 00:26:01.943 09:35:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:01.943 09:35:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:01.943 09:35:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:01.943 09:35:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:01.943 09:35:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:01.943 09:35:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:01.943 09:35:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:01.943 09:35:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:03.846 09:35:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:03.846 00:26:03.846 real 1m0.472s 00:26:03.846 user 3m22.614s 
00:26:03.846 sys 0m21.670s 00:26:03.846 09:35:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:03.846 09:35:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:03.846 ************************************ 00:26:03.846 END TEST nvmf_multiconnection 00:26:03.846 ************************************ 00:26:03.846 09:35:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:03.846 09:35:48 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:03.846 09:35:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:03.846 09:35:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:03.846 09:35:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:03.846 ************************************ 00:26:03.846 START TEST nvmf_initiator_timeout 00:26:03.846 ************************************ 00:26:03.846 09:35:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:04.104 * Looking for test storage... 00:26:04.104 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:04.104 09:35:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:04.104 09:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:04.104 09:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:04.105 09:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:04.105 09:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:04.105 09:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:04.105 09:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:04.105 09:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:04.105 09:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:04.105 09:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:04.105 09:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:04.105 09:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:04.105 09:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:04.105 09:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:04.105 09:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:04.105 09:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:04.105 09:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:04.105 09:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:04.105 09:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:04.105 09:35:48 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:04.105 09:35:48 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:04.105 09:35:48 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:04.105 09:35:48 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.105 09:35:48 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.105 09:35:48 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.105 09:35:48 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:26:04.105 09:35:48 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.105 09:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:26:04.105 09:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:04.105 09:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:04.105 09:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:04.105 09:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:04.105 09:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:26:04.105 09:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:04.105 09:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:04.105 09:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:04.105 09:35:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:04.105 09:35:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:04.105 09:35:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:26:04.105 09:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:04.105 09:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:04.105 09:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:04.105 09:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:04.105 09:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:04.105 09:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:04.105 09:35:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:04.105 09:35:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:04.105 09:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:04.105 09:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:04.105 09:35:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:26:04.105 09:35:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:06.007 09:35:50 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:06.007 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:06.007 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:06.007 
09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:06.007 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:06.007 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # 
NVMF_SECOND_TARGET_IP= 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:06.007 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:06.007 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.114 ms 00:26:06.007 00:26:06.007 --- 10.0.0.2 ping statistics --- 00:26:06.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:06.007 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:06.007 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:06.007 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:26:06.007 00:26:06.007 --- 10.0.0.1 ping statistics --- 00:26:06.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:06.007 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=805342 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 805342 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@829 -- # '[' -z 805342 ']' 00:26:06.007 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:06.008 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:06.008 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:06.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:06.008 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:06.008 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:06.008 [2024-07-14 09:35:50.389734] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
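Condensed from the nvmf_tcp_init trace above: the harness takes the two E810 netdevs it just discovered, moves one (cvl_0_0) into a private network namespace to act as the target, leaves its sibling (cvl_0_1) in the root namespace as the initiator, addresses the pair as 10.0.0.2/10.0.0.1, opens the NVMe/TCP port in the firewall, and ping-checks both directions before launching nvmf_tgt inside the namespace. A minimal sketch using this rig's interface names and addresses (both are machine-specific):

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add cvl_0_0_ns_spdk                      # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target port into it

ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator address (root namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in

ping -c 1 10.0.0.2                                # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator

modprobe nvme-tcp
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0xF &                       # 4 cores, all tracepoint groups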
00:26:06.008 [2024-07-14 09:35:50.389810] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:06.008 EAL: No free 2048 kB hugepages reported on node 1 00:26:06.008 [2024-07-14 09:35:50.451989] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:06.265 [2024-07-14 09:35:50.541965] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:06.265 [2024-07-14 09:35:50.542015] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:06.265 [2024-07-14 09:35:50.542044] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:06.265 [2024-07-14 09:35:50.542056] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:06.265 [2024-07-14 09:35:50.542066] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:06.265 [2024-07-14 09:35:50.542124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:06.265 [2024-07-14 09:35:50.542244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:06.265 [2024-07-14 09:35:50.542310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:06.265 [2024-07-14 09:35:50.542313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:06.265 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:06.265 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # return 0 00:26:06.265 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:06.265 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:06.265 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:06.265 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:06.265 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:06.265 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:06.265 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.265 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:06.524 Malloc0 00:26:06.524 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.524 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:26:06.524 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.524 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:06.524 Delay0 00:26:06.524 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.524 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:06.524 09:35:50 
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.524 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:06.524 [2024-07-14 09:35:50.732805] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:06.524 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.524 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:06.524 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.524 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:06.524 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.524 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:06.524 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.524 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:06.524 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.524 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:06.524 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.524 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:06.524 [2024-07-14 09:35:50.761087] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:06.524 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.524 09:35:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:07.090 09:35:51 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:26:07.090 09:35:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:26:07.090 09:35:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:07.090 09:35:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:07.090 09:35:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:26:09.617 09:35:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:09.617 09:35:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:09.617 09:35:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:26:09.617 09:35:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:09.617 09:35:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:09.617 09:35:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:26:09.617 09:35:53 
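The initiator_timeout setup that just scrolled by reduces to a short RPC sequence: a 64 MB malloc bdev with 512-byte blocks, wrapped in a delay bdev with 30-unit nominal latencies, exported through a TCP listener, then attached from the host side with nvme connect. A sketch of the same steps, using rpc_cmd as the harness does (it is the autotest wrapper around SPDK's rpc.py; the host NQN/ID are the generated values shown above for this run):

# backing bdev plus delay wrapper
rpc_cmd bdev_malloc_create 64 512 -b Malloc0
rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30

# TCP transport, subsystem, namespace and listener
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# connect from the initiator side; the namespace then surfaces as /dev/nvme0n1
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
             --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 \
             -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420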
nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=805764 00:26:09.617 09:35:53 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:26:09.617 09:35:53 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:26:09.617 [global] 00:26:09.617 thread=1 00:26:09.617 invalidate=1 00:26:09.617 rw=write 00:26:09.617 time_based=1 00:26:09.617 runtime=60 00:26:09.617 ioengine=libaio 00:26:09.617 direct=1 00:26:09.617 bs=4096 00:26:09.617 iodepth=1 00:26:09.617 norandommap=0 00:26:09.617 numjobs=1 00:26:09.617 00:26:09.617 verify_dump=1 00:26:09.617 verify_backlog=512 00:26:09.617 verify_state_save=0 00:26:09.617 do_verify=1 00:26:09.617 verify=crc32c-intel 00:26:09.617 [job0] 00:26:09.617 filename=/dev/nvme0n1 00:26:09.617 Could not set queue depth (nvme0n1) 00:26:09.617 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:09.617 fio-3.35 00:26:09.617 Starting 1 thread 00:26:12.142 09:35:56 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:26:12.142 09:35:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.142 09:35:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:12.142 true 00:26:12.142 09:35:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.142 09:35:56 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:26:12.142 09:35:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.142 09:35:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:12.142 true 00:26:12.142 09:35:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.142 09:35:56 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:26:12.142 09:35:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.142 09:35:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:12.142 true 00:26:12.142 09:35:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.142 09:35:56 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:26:12.142 09:35:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.142 09:35:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:12.142 true 00:26:12.142 09:35:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.142 09:35:56 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:26:15.453 09:35:59 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:26:15.453 09:35:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.453 09:35:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:15.453 true 00:26:15.453 09:35:59 
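The fio-wrapper flags (-p nvmf -i 4096 -d 1 -t write -r 60 -v) appear to map directly onto the job file echoed above (bs=4096, iodepth=1, rw=write, runtime=60, crc32c verification). Written out as a plain job file together with the latency manipulation that follows it, the test body looks roughly like this; the job filename is illustrative, and the 31,000,000 figure is about 31 seconds if the delay bdev's microsecond units are assumed, which is what stalls I/O long enough to exercise the initiator's timeout and reconnect handling while fio keeps verifying writes:

# job file as echoed by fio-wrapper above (filename here is illustrative)
cat > job0.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=60
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
EOF
fio job0.fio &
fio_pid=$!

# while fio runs, push the delay bdev latencies far above the baseline...
rpc_cmd bdev_delay_update_latency Delay0 avg_read  31000000
rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000
rpc_cmd bdev_delay_update_latency Delay0 p99_read  31000000
rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000
sleep 3
# ...then restore them so the run can complete and verify cleanly
rpc_cmd bdev_delay_update_latency Delay0 avg_read  30
rpc_cmd bdev_delay_update_latency Delay0 avg_write 30
rpc_cmd bdev_delay_update_latency Delay0 p99_read  30
rpc_cmd bdev_delay_update_latency Delay0 p99_write 30

wait "$fio_pid"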
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.453 09:35:59 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:26:15.453 09:35:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.453 09:35:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:15.453 true 00:26:15.453 09:35:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.453 09:35:59 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:26:15.453 09:35:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.453 09:35:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:15.453 true 00:26:15.453 09:35:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.453 09:35:59 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:26:15.453 09:35:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.453 09:35:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:15.453 true 00:26:15.453 09:35:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.453 09:35:59 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:26:15.453 09:35:59 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 805764 00:27:11.657 00:27:11.657 job0: (groupid=0, jobs=1): err= 0: pid=805833: Sun Jul 14 09:36:53 2024 00:27:11.657 read: IOPS=39, BW=158KiB/s (162kB/s)(9512KiB/60037msec) 00:27:11.657 slat (usec): min=7, max=15273, avg=34.50, stdev=364.96 00:27:11.657 clat (usec): min=459, max=41186k, avg=24847.17, stdev=844563.84 00:27:11.657 lat (usec): min=475, max=41186k, avg=24881.67, stdev=844563.41 00:27:11.657 clat percentiles (usec): 00:27:11.657 | 1.00th=[ 474], 5.00th=[ 486], 10.00th=[ 494], 00:27:11.657 | 20.00th=[ 506], 30.00th=[ 529], 40.00th=[ 545], 00:27:11.657 | 50.00th=[ 562], 60.00th=[ 570], 70.00th=[ 586], 00:27:11.657 | 80.00th=[ 644], 90.00th=[ 41157], 95.00th=[ 41157], 00:27:11.657 | 99.00th=[ 42206], 99.50th=[ 42206], 99.90th=[ 42206], 00:27:11.657 | 99.95th=[ 42730], 99.99th=[17112761] 00:27:11.657 write: IOPS=42, BW=171KiB/s (175kB/s)(10.0MiB/60037msec); 0 zone resets 00:27:11.657 slat (nsec): min=7257, max=76083, avg=24001.46, stdev=11181.12 00:27:11.657 clat (usec): min=241, max=449, avg=302.39, stdev=32.49 00:27:11.657 lat (usec): min=251, max=477, avg=326.40, stdev=38.39 00:27:11.657 clat percentiles (usec): 00:27:11.657 | 1.00th=[ 249], 5.00th=[ 258], 10.00th=[ 265], 20.00th=[ 273], 00:27:11.657 | 30.00th=[ 285], 40.00th=[ 289], 50.00th=[ 302], 60.00th=[ 306], 00:27:11.657 | 70.00th=[ 318], 80.00th=[ 326], 90.00th=[ 351], 95.00th=[ 363], 00:27:11.657 | 99.00th=[ 388], 99.50th=[ 392], 99.90th=[ 433], 99.95th=[ 441], 00:27:11.657 | 99.99th=[ 449] 00:27:11.657 bw ( KiB/s): min= 2432, max= 5536, per=100.00%, avg=4096.00, stdev=1105.97, samples=5 00:27:11.657 iops : min= 608, max= 1384, avg=1024.00, stdev=276.49, samples=5 00:27:11.657 lat (usec) : 250=0.71%, 500=58.32%, 750=32.62% 00:27:11.657 lat (msec) : 4=0.04%, 50=8.28%, >=2000=0.02% 00:27:11.657 cpu : usr=0.11%, sys=0.20%, ctx=4940, majf=0, 
minf=2 00:27:11.657 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:11.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.657 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.657 issued rwts: total=2378,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:11.657 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:11.657 00:27:11.657 Run status group 0 (all jobs): 00:27:11.657 READ: bw=158KiB/s (162kB/s), 158KiB/s-158KiB/s (162kB/s-162kB/s), io=9512KiB (9740kB), run=60037-60037msec 00:27:11.657 WRITE: bw=171KiB/s (175kB/s), 171KiB/s-171KiB/s (175kB/s-175kB/s), io=10.0MiB (10.5MB), run=60037-60037msec 00:27:11.657 00:27:11.657 Disk stats (read/write): 00:27:11.658 nvme0n1: ios=2473/2560, merge=0/0, ticks=18961/741, in_queue=19702, util=99.91% 00:27:11.658 09:36:53 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:11.658 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:11.658 09:36:53 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:11.658 09:36:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:27:11.658 09:36:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:11.658 09:36:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:11.658 09:36:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:11.658 09:36:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:11.658 09:36:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:27:11.658 09:36:53 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:27:11.658 09:36:53 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:27:11.658 nvmf hotplug test: fio successful as expected 00:27:11.658 09:36:53 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:11.658 09:36:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.658 09:36:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:11.658 09:36:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.658 09:36:53 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:27:11.658 09:36:53 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:27:11.658 09:36:53 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:27:11.658 09:36:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:11.658 09:36:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:27:11.658 09:36:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:11.658 09:36:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:27:11.658 09:36:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:11.658 09:36:53 nvmf_tcp.nvmf_initiator_timeout -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:11.658 rmmod nvme_tcp 00:27:11.658 rmmod nvme_fabrics 00:27:11.658 rmmod nvme_keyring 00:27:11.658 09:36:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:11.658 09:36:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:27:11.658 09:36:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:27:11.658 09:36:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 805342 ']' 00:27:11.658 09:36:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 805342 00:27:11.658 09:36:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@948 -- # '[' -z 805342 ']' 00:27:11.658 09:36:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # kill -0 805342 00:27:11.658 09:36:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # uname 00:27:11.658 09:36:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:11.658 09:36:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 805342 00:27:11.658 09:36:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:11.658 09:36:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:11.658 09:36:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 805342' 00:27:11.658 killing process with pid 805342 00:27:11.658 09:36:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@967 -- # kill 805342 00:27:11.658 09:36:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # wait 805342 00:27:11.658 09:36:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:11.658 09:36:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:11.658 09:36:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:11.658 09:36:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:11.658 09:36:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:11.658 09:36:54 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:11.658 09:36:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:11.658 09:36:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:11.917 09:36:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:11.917 00:27:11.917 real 1m8.113s 00:27:11.917 user 4m10.992s 00:27:11.917 sys 0m6.520s 00:27:11.917 09:36:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:11.917 09:36:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:11.917 ************************************ 00:27:11.917 END TEST nvmf_initiator_timeout 00:27:11.917 ************************************ 00:27:12.176 09:36:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:12.176 09:36:56 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:27:12.176 09:36:56 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:27:12.176 09:36:56 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:27:12.176 09:36:56 
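Teardown mirrors setup: disconnect the host, delete the subsystem, unload the host-side fabrics modules, kill the target, and unwind the namespace and addressing. Roughly, with the namespace removal stated as an assumption about what the harness's _remove_spdk_ns helper amounts to:

nvme disconnect -n nqn.2016-06.io.spdk:cnode1        # "disconnected 1 controller(s)"
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
rm -f ./local-job0-0-verify.state

sync
modprobe -v -r nvme-tcp                              # rmmod nvme_tcp / nvme_fabrics / nvme_keyring
modprobe -v -r nvme-fabrics

kill "$nvmfpid"                                      # 805342 in this run
wait "$nvmfpid"

ip netns delete cvl_0_0_ns_spdk                      # assumed equivalent of _remove_spdk_ns
ip -4 addr flush cvl_0_1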
nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:27:12.176 09:36:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:14.080 09:36:58 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:14.080 09:36:58 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:27:14.080 09:36:58 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:14.080 09:36:58 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:14.080 09:36:58 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:14.080 09:36:58 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:14.080 09:36:58 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:14.080 09:36:58 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:27:14.080 09:36:58 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:14.080 09:36:58 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:27:14.080 09:36:58 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:27:14.080 09:36:58 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:27:14.080 09:36:58 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:27:14.080 09:36:58 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:27:14.080 09:36:58 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:27:14.080 09:36:58 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:14.080 09:36:58 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:14.080 09:36:58 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:14.080 09:36:58 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:14.080 09:36:58 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:14.080 09:36:58 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:14.080 09:36:58 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:14.080 09:36:58 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:14.080 09:36:58 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:14.081 09:36:58 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:14.081 09:36:58 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:14.081 09:36:58 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:14.081 09:36:58 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:14.081 09:36:58 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:14.081 09:36:58 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:14.081 09:36:58 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:14.081 09:36:58 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:14.081 09:36:58 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:14.081 09:36:58 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:14.081 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:14.081 09:36:58 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:14.081 09:36:58 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:14.081 09:36:58 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:14.081 09:36:58 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:14.081 09:36:58 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:14.081 09:36:58 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:27:14.081 09:36:58 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:14.081 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:14.081 09:36:58 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:14.081 09:36:58 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:14.081 09:36:58 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:14.081 09:36:58 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:14.081 09:36:58 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:14.081 09:36:58 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:14.081 09:36:58 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:14.081 09:36:58 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:14.081 09:36:58 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:14.081 09:36:58 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:14.081 09:36:58 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:14.081 09:36:58 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:14.081 09:36:58 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:14.081 09:36:58 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:14.081 09:36:58 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:14.081 09:36:58 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:14.081 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:14.081 09:36:58 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:14.081 09:36:58 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:14.081 09:36:58 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:14.081 09:36:58 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:14.081 09:36:58 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:14.081 09:36:58 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:14.081 09:36:58 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:14.081 09:36:58 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:14.081 09:36:58 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:14.081 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:14.081 09:36:58 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:14.081 09:36:58 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:14.081 09:36:58 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:14.081 09:36:58 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:27:14.081 09:36:58 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:14.081 09:36:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:14.081 09:36:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:14.081 09:36:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:14.081 ************************************ 00:27:14.081 START TEST nvmf_perf_adq 00:27:14.081 ************************************ 00:27:14.081 09:36:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:14.081 * Looking for test storage... 
00:27:14.081 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:14.081 09:36:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:14.081 09:36:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:27:14.081 09:36:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:14.081 09:36:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:14.081 09:36:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:14.081 09:36:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:14.081 09:36:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:14.081 09:36:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:14.081 09:36:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:14.081 09:36:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:14.081 09:36:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:14.081 09:36:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:14.081 09:36:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:14.081 09:36:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:14.081 09:36:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:14.081 09:36:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:14.081 09:36:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:14.081 09:36:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:14.081 09:36:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:14.081 09:36:58 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:14.081 09:36:58 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:14.081 09:36:58 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:14.081 09:36:58 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.081 09:36:58 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.081 09:36:58 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.081 09:36:58 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:27:14.081 09:36:58 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.081 09:36:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:27:14.081 09:36:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:14.081 09:36:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:14.081 09:36:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:14.081 09:36:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:14.081 09:36:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:14.081 09:36:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:14.081 09:36:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:14.081 09:36:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:14.081 09:36:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:27:14.081 09:36:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:14.081 09:36:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:15.990 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:15.990 Found 0000:0a:00.1 (0x8086 - 0x159b) 
00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:15.990 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:15.990 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:27:15.990 09:37:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:27:16.922 09:37:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:27:18.824 09:37:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:27:24.089 09:37:08 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:27:24.089 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:24.089 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:24.089 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:24.089 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:24.089 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:24.089 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:24.089 09:37:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:24.089 09:37:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:24.090 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:24.090 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:24.090 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:24.090 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:24.090 09:37:08 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:24.090 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:24.090 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:27:24.090 00:27:24.090 --- 10.0.0.2 ping statistics --- 00:27:24.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:24.090 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:24.090 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:24.090 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:27:24.090 00:27:24.090 --- 10.0.0.1 ping statistics --- 00:27:24.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:24.090 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=817962 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 817962 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 817962 ']' 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:24.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:24.090 09:37:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:24.090 [2024-07-14 09:37:08.270011] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:27:24.091 [2024-07-14 09:37:08.270080] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:24.091 EAL: No free 2048 kB hugepages reported on node 1 00:27:24.091 [2024-07-14 09:37:08.331644] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:24.091 [2024-07-14 09:37:08.420662] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:24.091 [2024-07-14 09:37:08.420735] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:24.091 [2024-07-14 09:37:08.420755] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:24.091 [2024-07-14 09:37:08.420772] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:24.091 [2024-07-14 09:37:08.420786] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:24.091 [2024-07-14 09:37:08.420880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:24.091 [2024-07-14 09:37:08.420942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:24.091 [2024-07-14 09:37:08.421007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:24.091 [2024-07-14 09:37:08.421012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:24.091 09:37:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:24.091 09:37:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:27:24.091 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:24.091 09:37:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:24.091 09:37:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:24.091 09:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:24.091 09:37:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:27:24.091 09:37:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:24.091 09:37:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:24.091 09:37:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.091 09:37:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:24.091 09:37:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.349 09:37:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:24.349 09:37:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:27:24.349 09:37:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.349 09:37:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:24.349 09:37:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.349 09:37:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:24.349 09:37:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.349 09:37:08 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:27:24.349 09:37:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.349 09:37:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:27:24.349 09:37:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.349 09:37:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:24.349 [2024-07-14 09:37:08.671984] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:24.349 09:37:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.349 09:37:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:24.349 09:37:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.349 09:37:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:24.349 Malloc1 00:27:24.349 09:37:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.349 09:37:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:24.349 09:37:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.349 09:37:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:24.349 09:37:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.349 09:37:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:24.349 09:37:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.349 09:37:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:24.349 09:37:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.349 09:37:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:24.349 09:37:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.349 09:37:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:24.349 [2024-07-14 09:37:08.725486] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:24.349 09:37:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.349 09:37:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=817989 00:27:24.349 09:37:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:27:24.349 09:37:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:24.349 EAL: No free 2048 kB hugepages reported on node 1 00:27:26.877 09:37:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:27:26.877 09:37:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.877 09:37:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:26.877 09:37:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.877 09:37:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:27:26.878 
"tick_rate": 2700000000, 00:27:26.878 "poll_groups": [ 00:27:26.878 { 00:27:26.878 "name": "nvmf_tgt_poll_group_000", 00:27:26.878 "admin_qpairs": 1, 00:27:26.878 "io_qpairs": 1, 00:27:26.878 "current_admin_qpairs": 1, 00:27:26.878 "current_io_qpairs": 1, 00:27:26.878 "pending_bdev_io": 0, 00:27:26.878 "completed_nvme_io": 21182, 00:27:26.878 "transports": [ 00:27:26.878 { 00:27:26.878 "trtype": "TCP" 00:27:26.878 } 00:27:26.878 ] 00:27:26.878 }, 00:27:26.878 { 00:27:26.878 "name": "nvmf_tgt_poll_group_001", 00:27:26.878 "admin_qpairs": 0, 00:27:26.878 "io_qpairs": 1, 00:27:26.878 "current_admin_qpairs": 0, 00:27:26.878 "current_io_qpairs": 1, 00:27:26.878 "pending_bdev_io": 0, 00:27:26.878 "completed_nvme_io": 20932, 00:27:26.878 "transports": [ 00:27:26.878 { 00:27:26.878 "trtype": "TCP" 00:27:26.878 } 00:27:26.878 ] 00:27:26.878 }, 00:27:26.878 { 00:27:26.878 "name": "nvmf_tgt_poll_group_002", 00:27:26.878 "admin_qpairs": 0, 00:27:26.878 "io_qpairs": 1, 00:27:26.878 "current_admin_qpairs": 0, 00:27:26.878 "current_io_qpairs": 1, 00:27:26.878 "pending_bdev_io": 0, 00:27:26.878 "completed_nvme_io": 15142, 00:27:26.878 "transports": [ 00:27:26.878 { 00:27:26.878 "trtype": "TCP" 00:27:26.878 } 00:27:26.878 ] 00:27:26.878 }, 00:27:26.878 { 00:27:26.878 "name": "nvmf_tgt_poll_group_003", 00:27:26.878 "admin_qpairs": 0, 00:27:26.878 "io_qpairs": 1, 00:27:26.878 "current_admin_qpairs": 0, 00:27:26.878 "current_io_qpairs": 1, 00:27:26.878 "pending_bdev_io": 0, 00:27:26.878 "completed_nvme_io": 21382, 00:27:26.878 "transports": [ 00:27:26.878 { 00:27:26.878 "trtype": "TCP" 00:27:26.878 } 00:27:26.878 ] 00:27:26.878 } 00:27:26.878 ] 00:27:26.878 }' 00:27:26.878 09:37:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:27:26.878 09:37:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:27:26.878 09:37:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:27:26.878 09:37:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:27:26.878 09:37:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 817989 00:27:34.984 Initializing NVMe Controllers 00:27:34.984 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:34.984 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:34.984 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:34.984 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:34.984 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:34.984 Initialization complete. Launching workers. 
00:27:34.984 ======================================================== 00:27:34.984 Latency(us) 00:27:34.984 Device Information : IOPS MiB/s Average min max 00:27:34.984 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7900.37 30.86 8103.12 2021.23 20229.55 00:27:34.984 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10947.44 42.76 5846.28 2029.04 8491.90 00:27:34.984 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10959.54 42.81 5839.09 1880.54 8205.48 00:27:34.984 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10844.65 42.36 5901.61 1304.34 9130.69 00:27:34.984 ======================================================== 00:27:34.984 Total : 40652.00 158.80 6297.70 1304.34 20229.55 00:27:34.984 00:27:34.984 09:37:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:27:34.984 09:37:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:34.984 09:37:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:27:34.984 09:37:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:34.984 09:37:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:27:34.984 09:37:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:34.984 09:37:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:34.984 rmmod nvme_tcp 00:27:34.984 rmmod nvme_fabrics 00:27:34.984 rmmod nvme_keyring 00:27:34.984 09:37:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:34.984 09:37:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:27:34.984 09:37:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:27:34.984 09:37:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 817962 ']' 00:27:34.984 09:37:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 817962 00:27:34.984 09:37:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 817962 ']' 00:27:34.984 09:37:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 817962 00:27:34.984 09:37:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:27:34.984 09:37:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:34.984 09:37:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 817962 00:27:34.984 09:37:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:34.984 09:37:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:34.984 09:37:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 817962' 00:27:34.984 killing process with pid 817962 00:27:34.984 09:37:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 817962 00:27:34.984 09:37:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 817962 00:27:34.984 09:37:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:34.984 09:37:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:34.984 09:37:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:34.984 09:37:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:34.984 09:37:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:34.984 09:37:19 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:34.984 09:37:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:34.984 09:37:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:36.884 09:37:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:36.884 09:37:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:27:36.884 09:37:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:27:37.450 09:37:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:27:39.976 09:37:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:27:45.250 09:37:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:27:45.250 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:45.250 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:45.250 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:45.250 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:45.250 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:45.250 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:45.250 09:37:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:45.250 09:37:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:45.251 09:37:28 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:45.251 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:45.251 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:45.251 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:45.251 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:45.251 
09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:45.251 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:45.251 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:27:45.251 00:27:45.251 --- 10.0.0.2 ping statistics --- 00:27:45.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:45.251 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:27:45.251 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:45.251 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:45.251 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:27:45.251 00:27:45.251 --- 10.0.0.1 ping statistics --- 00:27:45.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:45.251 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:27:45.252 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:45.252 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:27:45.252 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:45.252 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:45.252 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:45.252 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:45.252 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:45.252 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:45.252 09:37:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:45.252 09:37:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:27:45.252 09:37:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:27:45.252 09:37:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:27:45.252 net.core.busy_poll = 1 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:27:45.252 net.core.busy_read = 1 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=820599 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 820599 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 820599 ']' 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:45.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:45.252 [2024-07-14 09:37:29.180036] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:27:45.252 [2024-07-14 09:37:29.180140] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:45.252 EAL: No free 2048 kB hugepages reported on node 1 00:27:45.252 [2024-07-14 09:37:29.246358] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:45.252 [2024-07-14 09:37:29.333325] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:45.252 [2024-07-14 09:37:29.333380] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:45.252 [2024-07-14 09:37:29.333408] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:45.252 [2024-07-14 09:37:29.333419] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:45.252 [2024-07-14 09:37:29.333435] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:45.252 [2024-07-14 09:37:29.333518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:45.252 [2024-07-14 09:37:29.333583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:45.252 [2024-07-14 09:37:29.333649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:45.252 [2024-07-14 09:37:29.333651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:45.252 [2024-07-14 09:37:29.574941] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:45.252 Malloc1 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.252 09:37:29 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:45.252 [2024-07-14 09:37:29.628508] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=820746 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:27:45.252 09:37:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:45.252 EAL: No free 2048 kB hugepages reported on node 1 00:27:47.779 09:37:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:27:47.779 09:37:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.779 09:37:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:47.779 09:37:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.779 09:37:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:27:47.779 "tick_rate": 2700000000, 00:27:47.779 "poll_groups": [ 00:27:47.779 { 00:27:47.779 "name": "nvmf_tgt_poll_group_000", 00:27:47.779 "admin_qpairs": 1, 00:27:47.779 "io_qpairs": 2, 00:27:47.779 "current_admin_qpairs": 1, 00:27:47.779 "current_io_qpairs": 2, 00:27:47.779 "pending_bdev_io": 0, 00:27:47.779 "completed_nvme_io": 25176, 00:27:47.779 "transports": [ 00:27:47.779 { 00:27:47.779 "trtype": "TCP" 00:27:47.779 } 00:27:47.779 ] 00:27:47.779 }, 00:27:47.779 { 00:27:47.779 "name": "nvmf_tgt_poll_group_001", 00:27:47.779 "admin_qpairs": 0, 00:27:47.779 "io_qpairs": 2, 00:27:47.779 "current_admin_qpairs": 0, 00:27:47.779 "current_io_qpairs": 2, 00:27:47.779 "pending_bdev_io": 0, 00:27:47.779 "completed_nvme_io": 25381, 00:27:47.779 "transports": [ 00:27:47.779 { 00:27:47.779 "trtype": "TCP" 00:27:47.779 } 00:27:47.779 ] 00:27:47.779 }, 00:27:47.779 { 00:27:47.779 "name": "nvmf_tgt_poll_group_002", 00:27:47.779 "admin_qpairs": 0, 00:27:47.779 "io_qpairs": 0, 00:27:47.779 "current_admin_qpairs": 0, 00:27:47.779 "current_io_qpairs": 0, 00:27:47.779 "pending_bdev_io": 0, 00:27:47.779 "completed_nvme_io": 0, 
00:27:47.779 "transports": [ 00:27:47.779 { 00:27:47.779 "trtype": "TCP" 00:27:47.779 } 00:27:47.779 ] 00:27:47.779 }, 00:27:47.779 { 00:27:47.779 "name": "nvmf_tgt_poll_group_003", 00:27:47.779 "admin_qpairs": 0, 00:27:47.779 "io_qpairs": 0, 00:27:47.779 "current_admin_qpairs": 0, 00:27:47.779 "current_io_qpairs": 0, 00:27:47.779 "pending_bdev_io": 0, 00:27:47.779 "completed_nvme_io": 0, 00:27:47.779 "transports": [ 00:27:47.779 { 00:27:47.779 "trtype": "TCP" 00:27:47.779 } 00:27:47.779 ] 00:27:47.779 } 00:27:47.779 ] 00:27:47.779 }' 00:27:47.779 09:37:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:27:47.779 09:37:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:27:47.779 09:37:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:27:47.779 09:37:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:27:47.779 09:37:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 820746 00:27:55.884 Initializing NVMe Controllers 00:27:55.884 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:55.884 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:55.884 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:55.884 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:55.884 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:55.884 Initialization complete. Launching workers. 00:27:55.884 ======================================================== 00:27:55.884 Latency(us) 00:27:55.884 Device Information : IOPS MiB/s Average min max 00:27:55.884 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5899.60 23.05 10885.51 1769.76 56656.25 00:27:55.884 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7052.30 27.55 9079.80 1680.98 53547.28 00:27:55.884 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6392.00 24.97 10018.42 1859.50 53444.31 00:27:55.885 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7568.20 29.56 8480.66 1739.19 53453.58 00:27:55.885 ======================================================== 00:27:55.885 Total : 26912.09 105.13 9530.09 1680.98 56656.25 00:27:55.885 00:27:55.885 09:37:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:27:55.885 09:37:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:55.885 09:37:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:27:55.885 09:37:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:55.885 09:37:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:27:55.885 09:37:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:55.885 09:37:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:55.885 rmmod nvme_tcp 00:27:55.885 rmmod nvme_fabrics 00:27:55.885 rmmod nvme_keyring 00:27:55.885 09:37:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:55.885 09:37:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:27:55.885 09:37:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:27:55.885 09:37:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 820599 ']' 00:27:55.885 09:37:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 820599 00:27:55.885 09:37:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 820599 ']' 00:27:55.885 09:37:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 820599 00:27:55.885 09:37:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:27:55.885 09:37:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:55.885 09:37:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 820599 00:27:55.885 09:37:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:55.885 09:37:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:55.885 09:37:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 820599' 00:27:55.885 killing process with pid 820599 00:27:55.885 09:37:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 820599 00:27:55.885 09:37:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 820599 00:27:55.885 09:37:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:55.885 09:37:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:55.885 09:37:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:55.885 09:37:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:55.885 09:37:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:55.885 09:37:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:55.885 09:37:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:55.885 09:37:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:59.195 09:37:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:59.195 09:37:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:27:59.195 00:27:59.195 real 0m44.885s 00:27:59.195 user 2m29.903s 00:27:59.195 sys 0m12.790s 00:27:59.195 09:37:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:59.195 09:37:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:59.195 ************************************ 00:27:59.195 END TEST nvmf_perf_adq 00:27:59.195 ************************************ 00:27:59.195 09:37:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:59.195 09:37:43 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:59.195 09:37:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:59.195 09:37:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:59.195 09:37:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:59.195 ************************************ 00:27:59.195 START TEST nvmf_shutdown 00:27:59.195 ************************************ 00:27:59.195 09:37:43 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:59.195 * Looking for test storage... 
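The perf_adq run that finishes above drives the whole target setup through rpc_cmd. For readability, the same sequence can be issued by hand with scripts/rpc.py from the SPDK repo root; this is only a sketch that mirrors the arguments visible in the trace (impl name, NQN, serial and listener address taken from the log) and assumes nvmf_tgt was started with --wait-for-rpc so the socket options can be applied before framework init:

  # group connections by placement ID and enable zero-copy send on the posix sock impl
  scripts/rpc.py sock_impl_set_options -i posix --enable-placement-id 1 --enable-zerocopy-send-server
  scripts/rpc.py framework_start_init
  # TCP transport with 8 KiB IO units; --sock-priority 1 lets ADQ steer the connections
  scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
  # 64 MiB ramdisk namespace exported on 10.0.0.2:4420
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The pass/fail criterion earlier in the trace is the nvmf_get_stats call: the jq filter counts poll groups that report no active I/O queue pairs, and the test only proceeds because at least two of the four groups stayed idle (count=2 above), i.e. the perf connections were concentrated on the expected poll groups:

  rpc_cmd nvmf_get_stats | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' | wc -l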
00:27:59.195 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:59.195 09:37:43 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:59.195 09:37:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:27:59.195 09:37:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:59.195 09:37:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:59.195 09:37:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:59.195 09:37:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:59.195 09:37:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:59.195 09:37:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:59.195 09:37:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:59.195 09:37:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:59.195 09:37:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:59.195 09:37:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:59.195 09:37:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:59.195 09:37:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:59.195 09:37:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:59.195 09:37:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:59.195 09:37:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:59.195 09:37:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:59.195 09:37:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:59.195 09:37:43 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:59.195 09:37:43 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:59.195 09:37:43 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:59.195 09:37:43 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.195 09:37:43 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.195 09:37:43 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.195 09:37:43 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:27:59.195 09:37:43 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.195 09:37:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:27:59.195 09:37:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:59.195 09:37:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:59.195 09:37:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:59.195 09:37:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:59.195 09:37:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:59.195 09:37:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:59.195 09:37:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:59.195 09:37:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:59.195 09:37:43 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:59.195 09:37:43 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:59.195 09:37:43 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:27:59.195 09:37:43 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:59.195 09:37:43 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:59.195 09:37:43 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:59.195 ************************************ 00:27:59.195 START TEST nvmf_shutdown_tc1 00:27:59.195 ************************************ 00:27:59.195 09:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:27:59.195 09:37:43 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:27:59.195 09:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:59.195 09:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:59.195 09:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:59.195 09:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:59.195 09:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:59.195 09:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:59.195 09:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:59.195 09:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:59.195 09:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:59.195 09:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:59.195 09:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:59.195 09:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:59.195 09:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:01.100 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:01.100 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:01.100 09:37:45 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:01.100 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:01.100 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:01.100 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:01.100 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:28:01.100 00:28:01.100 --- 10.0.0.2 ping statistics --- 00:28:01.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:01.100 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:01.100 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:01.100 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:28:01.100 00:28:01.100 --- 10.0.0.1 ping statistics --- 00:28:01.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:01.100 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:01.100 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:28:01.101 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:01.101 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:01.101 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:01.101 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=824037 00:28:01.101 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:01.101 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 824037 00:28:01.101 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 824037 ']' 00:28:01.101 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:01.101 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:01.101 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:01.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:01.101 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:01.101 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:01.101 [2024-07-14 09:37:45.540738] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
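The nvmf_tcp_init block above is what lets the two E810 ports detected earlier act as both ends of the connection on a single host: cvl_0_0 (0000:0a:00.0) is moved into a private network namespace and addressed as 10.0.0.2 for the target, while cvl_0_1 (0000:0a:00.1) stays in the root namespace as the 10.0.0.1 initiator side. Condensed from the trace, with the interface names as detected on this node:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open TCP/4420 on the root-namespace port, then check reachability in both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The target itself is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt), which is why the NVMF_APP invocations in the rest of this test carry the same prefix.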
00:28:01.101 [2024-07-14 09:37:45.540809] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:01.359 EAL: No free 2048 kB hugepages reported on node 1 00:28:01.359 [2024-07-14 09:37:45.604603] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:01.359 [2024-07-14 09:37:45.693202] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:01.359 [2024-07-14 09:37:45.693254] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:01.359 [2024-07-14 09:37:45.693277] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:01.359 [2024-07-14 09:37:45.693288] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:01.359 [2024-07-14 09:37:45.693298] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:01.359 [2024-07-14 09:37:45.693354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:01.359 [2024-07-14 09:37:45.693485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:01.359 [2024-07-14 09:37:45.693550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:28:01.359 [2024-07-14 09:37:45.693552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:01.617 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:01.617 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:28:01.618 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:01.618 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:01.618 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:01.618 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:01.618 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:01.618 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.618 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:01.618 [2024-07-14 09:37:45.853774] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:01.618 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.618 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:28:01.618 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:28:01.618 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:01.618 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:01.618 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:01.618 09:37:45 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:01.618 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:01.618 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:01.618 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:01.618 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:01.618 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:01.618 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:01.618 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:01.618 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:01.618 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:01.618 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:01.618 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:01.618 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:01.618 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:01.618 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:01.618 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:01.618 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:01.618 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:01.618 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:01.618 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:01.618 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:28:01.618 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.618 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:01.618 Malloc1 00:28:01.618 [2024-07-14 09:37:45.943598] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:01.618 Malloc2 00:28:01.618 Malloc3 00:28:01.618 Malloc4 00:28:01.876 Malloc5 00:28:01.876 Malloc6 00:28:01.876 Malloc7 00:28:01.876 Malloc8 00:28:01.876 Malloc9 00:28:02.135 Malloc10 00:28:02.135 09:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.135 09:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:28:02.135 09:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:02.135 09:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:02.135 09:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=824183 00:28:02.135 09:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 824183 
/var/tmp/bdevperf.sock 00:28:02.135 09:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 824183 ']' 00:28:02.135 09:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:02.135 09:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:02.135 09:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:02.135 09:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:28:02.135 09:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:02.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:02.135 09:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:28:02.135 09:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:02.135 09:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:28:02.135 09:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:02.135 09:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:02.135 09:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:02.135 { 00:28:02.135 "params": { 00:28:02.135 "name": "Nvme$subsystem", 00:28:02.135 "trtype": "$TEST_TRANSPORT", 00:28:02.135 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:02.135 "adrfam": "ipv4", 00:28:02.135 "trsvcid": "$NVMF_PORT", 00:28:02.135 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:02.135 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:02.135 "hdgst": ${hdgst:-false}, 00:28:02.135 "ddgst": ${ddgst:-false} 00:28:02.135 }, 00:28:02.135 "method": "bdev_nvme_attach_controller" 00:28:02.135 } 00:28:02.135 EOF 00:28:02.135 )") 00:28:02.135 09:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:02.135 09:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:02.135 09:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:02.135 { 00:28:02.135 "params": { 00:28:02.135 "name": "Nvme$subsystem", 00:28:02.135 "trtype": "$TEST_TRANSPORT", 00:28:02.135 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:02.135 "adrfam": "ipv4", 00:28:02.135 "trsvcid": "$NVMF_PORT", 00:28:02.135 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:02.135 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:02.135 "hdgst": ${hdgst:-false}, 00:28:02.135 "ddgst": ${ddgst:-false} 00:28:02.135 }, 00:28:02.135 "method": "bdev_nvme_attach_controller" 00:28:02.135 } 00:28:02.135 EOF 00:28:02.135 )") 00:28:02.135 09:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:02.135 09:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:02.135 09:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:02.135 { 00:28:02.135 "params": { 00:28:02.135 
"name": "Nvme$subsystem", 00:28:02.135 "trtype": "$TEST_TRANSPORT", 00:28:02.135 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:02.135 "adrfam": "ipv4", 00:28:02.135 "trsvcid": "$NVMF_PORT", 00:28:02.135 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:02.135 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:02.135 "hdgst": ${hdgst:-false}, 00:28:02.135 "ddgst": ${ddgst:-false} 00:28:02.135 }, 00:28:02.135 "method": "bdev_nvme_attach_controller" 00:28:02.135 } 00:28:02.135 EOF 00:28:02.135 )") 00:28:02.135 09:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:02.135 09:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:02.135 09:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:02.135 { 00:28:02.135 "params": { 00:28:02.135 "name": "Nvme$subsystem", 00:28:02.135 "trtype": "$TEST_TRANSPORT", 00:28:02.135 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:02.135 "adrfam": "ipv4", 00:28:02.135 "trsvcid": "$NVMF_PORT", 00:28:02.135 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:02.135 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:02.135 "hdgst": ${hdgst:-false}, 00:28:02.135 "ddgst": ${ddgst:-false} 00:28:02.135 }, 00:28:02.135 "method": "bdev_nvme_attach_controller" 00:28:02.135 } 00:28:02.135 EOF 00:28:02.135 )") 00:28:02.135 09:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:02.135 09:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:02.135 09:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:02.135 { 00:28:02.135 "params": { 00:28:02.135 "name": "Nvme$subsystem", 00:28:02.135 "trtype": "$TEST_TRANSPORT", 00:28:02.135 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:02.135 "adrfam": "ipv4", 00:28:02.135 "trsvcid": "$NVMF_PORT", 00:28:02.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:02.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:02.136 "hdgst": ${hdgst:-false}, 00:28:02.136 "ddgst": ${ddgst:-false} 00:28:02.136 }, 00:28:02.136 "method": "bdev_nvme_attach_controller" 00:28:02.136 } 00:28:02.136 EOF 00:28:02.136 )") 00:28:02.136 09:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:02.136 09:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:02.136 09:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:02.136 { 00:28:02.136 "params": { 00:28:02.136 "name": "Nvme$subsystem", 00:28:02.136 "trtype": "$TEST_TRANSPORT", 00:28:02.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:02.136 "adrfam": "ipv4", 00:28:02.136 "trsvcid": "$NVMF_PORT", 00:28:02.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:02.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:02.136 "hdgst": ${hdgst:-false}, 00:28:02.136 "ddgst": ${ddgst:-false} 00:28:02.136 }, 00:28:02.136 "method": "bdev_nvme_attach_controller" 00:28:02.136 } 00:28:02.136 EOF 00:28:02.136 )") 00:28:02.136 09:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:02.136 09:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:02.136 09:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:02.136 { 00:28:02.136 "params": { 00:28:02.136 "name": "Nvme$subsystem", 
00:28:02.136 "trtype": "$TEST_TRANSPORT", 00:28:02.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:02.136 "adrfam": "ipv4", 00:28:02.136 "trsvcid": "$NVMF_PORT", 00:28:02.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:02.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:02.136 "hdgst": ${hdgst:-false}, 00:28:02.136 "ddgst": ${ddgst:-false} 00:28:02.136 }, 00:28:02.136 "method": "bdev_nvme_attach_controller" 00:28:02.136 } 00:28:02.136 EOF 00:28:02.136 )") 00:28:02.136 09:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:02.136 09:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:02.136 09:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:02.136 { 00:28:02.136 "params": { 00:28:02.136 "name": "Nvme$subsystem", 00:28:02.136 "trtype": "$TEST_TRANSPORT", 00:28:02.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:02.136 "adrfam": "ipv4", 00:28:02.136 "trsvcid": "$NVMF_PORT", 00:28:02.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:02.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:02.136 "hdgst": ${hdgst:-false}, 00:28:02.136 "ddgst": ${ddgst:-false} 00:28:02.136 }, 00:28:02.136 "method": "bdev_nvme_attach_controller" 00:28:02.136 } 00:28:02.136 EOF 00:28:02.136 )") 00:28:02.136 09:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:02.136 09:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:02.136 09:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:02.136 { 00:28:02.136 "params": { 00:28:02.136 "name": "Nvme$subsystem", 00:28:02.136 "trtype": "$TEST_TRANSPORT", 00:28:02.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:02.136 "adrfam": "ipv4", 00:28:02.136 "trsvcid": "$NVMF_PORT", 00:28:02.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:02.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:02.136 "hdgst": ${hdgst:-false}, 00:28:02.136 "ddgst": ${ddgst:-false} 00:28:02.136 }, 00:28:02.136 "method": "bdev_nvme_attach_controller" 00:28:02.136 } 00:28:02.136 EOF 00:28:02.136 )") 00:28:02.136 09:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:02.136 09:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:02.136 09:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:02.136 { 00:28:02.136 "params": { 00:28:02.136 "name": "Nvme$subsystem", 00:28:02.136 "trtype": "$TEST_TRANSPORT", 00:28:02.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:02.136 "adrfam": "ipv4", 00:28:02.136 "trsvcid": "$NVMF_PORT", 00:28:02.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:02.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:02.136 "hdgst": ${hdgst:-false}, 00:28:02.136 "ddgst": ${ddgst:-false} 00:28:02.136 }, 00:28:02.136 "method": "bdev_nvme_attach_controller" 00:28:02.136 } 00:28:02.136 EOF 00:28:02.136 )") 00:28:02.136 09:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:02.136 09:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:28:02.136 09:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:28:02.136 09:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:02.136 "params": { 00:28:02.136 "name": "Nvme1", 00:28:02.136 "trtype": "tcp", 00:28:02.136 "traddr": "10.0.0.2", 00:28:02.136 "adrfam": "ipv4", 00:28:02.136 "trsvcid": "4420", 00:28:02.136 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:02.136 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:02.136 "hdgst": false, 00:28:02.136 "ddgst": false 00:28:02.136 }, 00:28:02.136 "method": "bdev_nvme_attach_controller" 00:28:02.136 },{ 00:28:02.136 "params": { 00:28:02.136 "name": "Nvme2", 00:28:02.136 "trtype": "tcp", 00:28:02.136 "traddr": "10.0.0.2", 00:28:02.136 "adrfam": "ipv4", 00:28:02.136 "trsvcid": "4420", 00:28:02.136 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:02.136 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:02.136 "hdgst": false, 00:28:02.136 "ddgst": false 00:28:02.136 }, 00:28:02.136 "method": "bdev_nvme_attach_controller" 00:28:02.136 },{ 00:28:02.136 "params": { 00:28:02.136 "name": "Nvme3", 00:28:02.136 "trtype": "tcp", 00:28:02.136 "traddr": "10.0.0.2", 00:28:02.136 "adrfam": "ipv4", 00:28:02.136 "trsvcid": "4420", 00:28:02.136 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:02.136 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:02.136 "hdgst": false, 00:28:02.136 "ddgst": false 00:28:02.136 }, 00:28:02.136 "method": "bdev_nvme_attach_controller" 00:28:02.136 },{ 00:28:02.136 "params": { 00:28:02.136 "name": "Nvme4", 00:28:02.136 "trtype": "tcp", 00:28:02.136 "traddr": "10.0.0.2", 00:28:02.136 "adrfam": "ipv4", 00:28:02.136 "trsvcid": "4420", 00:28:02.136 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:02.136 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:02.136 "hdgst": false, 00:28:02.136 "ddgst": false 00:28:02.136 }, 00:28:02.136 "method": "bdev_nvme_attach_controller" 00:28:02.136 },{ 00:28:02.136 "params": { 00:28:02.136 "name": "Nvme5", 00:28:02.136 "trtype": "tcp", 00:28:02.136 "traddr": "10.0.0.2", 00:28:02.136 "adrfam": "ipv4", 00:28:02.136 "trsvcid": "4420", 00:28:02.136 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:02.136 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:02.136 "hdgst": false, 00:28:02.136 "ddgst": false 00:28:02.136 }, 00:28:02.136 "method": "bdev_nvme_attach_controller" 00:28:02.136 },{ 00:28:02.136 "params": { 00:28:02.136 "name": "Nvme6", 00:28:02.136 "trtype": "tcp", 00:28:02.136 "traddr": "10.0.0.2", 00:28:02.136 "adrfam": "ipv4", 00:28:02.136 "trsvcid": "4420", 00:28:02.136 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:02.136 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:02.136 "hdgst": false, 00:28:02.136 "ddgst": false 00:28:02.136 }, 00:28:02.136 "method": "bdev_nvme_attach_controller" 00:28:02.136 },{ 00:28:02.136 "params": { 00:28:02.136 "name": "Nvme7", 00:28:02.136 "trtype": "tcp", 00:28:02.136 "traddr": "10.0.0.2", 00:28:02.136 "adrfam": "ipv4", 00:28:02.136 "trsvcid": "4420", 00:28:02.136 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:02.136 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:02.136 "hdgst": false, 00:28:02.136 "ddgst": false 00:28:02.136 }, 00:28:02.136 "method": "bdev_nvme_attach_controller" 00:28:02.136 },{ 00:28:02.136 "params": { 00:28:02.136 "name": "Nvme8", 00:28:02.136 "trtype": "tcp", 00:28:02.136 "traddr": "10.0.0.2", 00:28:02.136 "adrfam": "ipv4", 00:28:02.136 "trsvcid": "4420", 00:28:02.136 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:02.136 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:02.136 "hdgst": false, 
00:28:02.136 "ddgst": false 00:28:02.136 }, 00:28:02.136 "method": "bdev_nvme_attach_controller" 00:28:02.136 },{ 00:28:02.136 "params": { 00:28:02.136 "name": "Nvme9", 00:28:02.136 "trtype": "tcp", 00:28:02.136 "traddr": "10.0.0.2", 00:28:02.136 "adrfam": "ipv4", 00:28:02.136 "trsvcid": "4420", 00:28:02.136 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:02.136 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:02.136 "hdgst": false, 00:28:02.136 "ddgst": false 00:28:02.136 }, 00:28:02.136 "method": "bdev_nvme_attach_controller" 00:28:02.136 },{ 00:28:02.136 "params": { 00:28:02.136 "name": "Nvme10", 00:28:02.136 "trtype": "tcp", 00:28:02.136 "traddr": "10.0.0.2", 00:28:02.136 "adrfam": "ipv4", 00:28:02.136 "trsvcid": "4420", 00:28:02.136 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:02.136 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:02.136 "hdgst": false, 00:28:02.136 "ddgst": false 00:28:02.136 }, 00:28:02.136 "method": "bdev_nvme_attach_controller" 00:28:02.136 }' 00:28:02.136 [2024-07-14 09:37:46.458187] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:28:02.136 [2024-07-14 09:37:46.458288] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:28:02.136 EAL: No free 2048 kB hugepages reported on node 1 00:28:02.137 [2024-07-14 09:37:46.521601] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:02.395 [2024-07-14 09:37:46.608953] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:04.299 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:04.299 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:28:04.299 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:04.299 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.299 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:04.299 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.299 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 824183 00:28:04.299 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:28:04.299 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:28:05.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 824183 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:28:05.234 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 824037 00:28:05.234 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:05.234 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:05.234 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:28:05.234 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@532 -- # local subsystem config 00:28:05.234 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:05.234 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:05.234 { 00:28:05.234 "params": { 00:28:05.234 "name": "Nvme$subsystem", 00:28:05.234 "trtype": "$TEST_TRANSPORT", 00:28:05.234 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:05.234 "adrfam": "ipv4", 00:28:05.234 "trsvcid": "$NVMF_PORT", 00:28:05.234 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:05.234 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:05.234 "hdgst": ${hdgst:-false}, 00:28:05.234 "ddgst": ${ddgst:-false} 00:28:05.234 }, 00:28:05.234 "method": "bdev_nvme_attach_controller" 00:28:05.234 } 00:28:05.234 EOF 00:28:05.234 )") 00:28:05.234 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:05.234 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:05.234 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:05.234 { 00:28:05.234 "params": { 00:28:05.234 "name": "Nvme$subsystem", 00:28:05.234 "trtype": "$TEST_TRANSPORT", 00:28:05.234 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:05.234 "adrfam": "ipv4", 00:28:05.234 "trsvcid": "$NVMF_PORT", 00:28:05.234 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:05.234 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:05.234 "hdgst": ${hdgst:-false}, 00:28:05.234 "ddgst": ${ddgst:-false} 00:28:05.234 }, 00:28:05.234 "method": "bdev_nvme_attach_controller" 00:28:05.234 } 00:28:05.234 EOF 00:28:05.234 )") 00:28:05.234 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:05.234 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:05.234 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:05.234 { 00:28:05.234 "params": { 00:28:05.234 "name": "Nvme$subsystem", 00:28:05.234 "trtype": "$TEST_TRANSPORT", 00:28:05.234 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:05.234 "adrfam": "ipv4", 00:28:05.234 "trsvcid": "$NVMF_PORT", 00:28:05.234 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:05.234 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:05.234 "hdgst": ${hdgst:-false}, 00:28:05.234 "ddgst": ${ddgst:-false} 00:28:05.234 }, 00:28:05.234 "method": "bdev_nvme_attach_controller" 00:28:05.234 } 00:28:05.234 EOF 00:28:05.234 )") 00:28:05.234 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:05.234 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:05.234 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:05.234 { 00:28:05.234 "params": { 00:28:05.234 "name": "Nvme$subsystem", 00:28:05.234 "trtype": "$TEST_TRANSPORT", 00:28:05.234 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:05.234 "adrfam": "ipv4", 00:28:05.234 "trsvcid": "$NVMF_PORT", 00:28:05.234 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:05.234 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:05.234 "hdgst": ${hdgst:-false}, 00:28:05.234 "ddgst": ${ddgst:-false} 00:28:05.234 }, 00:28:05.234 "method": "bdev_nvme_attach_controller" 00:28:05.234 } 00:28:05.234 EOF 00:28:05.234 )") 00:28:05.234 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # cat 00:28:05.234 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:05.234 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:05.234 { 00:28:05.234 "params": { 00:28:05.234 "name": "Nvme$subsystem", 00:28:05.234 "trtype": "$TEST_TRANSPORT", 00:28:05.234 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:05.234 "adrfam": "ipv4", 00:28:05.234 "trsvcid": "$NVMF_PORT", 00:28:05.234 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:05.234 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:05.234 "hdgst": ${hdgst:-false}, 00:28:05.235 "ddgst": ${ddgst:-false} 00:28:05.235 }, 00:28:05.235 "method": "bdev_nvme_attach_controller" 00:28:05.235 } 00:28:05.235 EOF 00:28:05.235 )") 00:28:05.235 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:05.235 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:05.235 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:05.235 { 00:28:05.235 "params": { 00:28:05.235 "name": "Nvme$subsystem", 00:28:05.235 "trtype": "$TEST_TRANSPORT", 00:28:05.235 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:05.235 "adrfam": "ipv4", 00:28:05.235 "trsvcid": "$NVMF_PORT", 00:28:05.235 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:05.235 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:05.235 "hdgst": ${hdgst:-false}, 00:28:05.235 "ddgst": ${ddgst:-false} 00:28:05.235 }, 00:28:05.235 "method": "bdev_nvme_attach_controller" 00:28:05.235 } 00:28:05.235 EOF 00:28:05.235 )") 00:28:05.235 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:05.235 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:05.235 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:05.235 { 00:28:05.235 "params": { 00:28:05.235 "name": "Nvme$subsystem", 00:28:05.235 "trtype": "$TEST_TRANSPORT", 00:28:05.235 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:05.235 "adrfam": "ipv4", 00:28:05.235 "trsvcid": "$NVMF_PORT", 00:28:05.235 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:05.235 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:05.235 "hdgst": ${hdgst:-false}, 00:28:05.235 "ddgst": ${ddgst:-false} 00:28:05.235 }, 00:28:05.235 "method": "bdev_nvme_attach_controller" 00:28:05.235 } 00:28:05.235 EOF 00:28:05.235 )") 00:28:05.235 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:05.235 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:05.235 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:05.235 { 00:28:05.235 "params": { 00:28:05.235 "name": "Nvme$subsystem", 00:28:05.235 "trtype": "$TEST_TRANSPORT", 00:28:05.235 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:05.235 "adrfam": "ipv4", 00:28:05.235 "trsvcid": "$NVMF_PORT", 00:28:05.235 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:05.235 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:05.235 "hdgst": ${hdgst:-false}, 00:28:05.235 "ddgst": ${ddgst:-false} 00:28:05.235 }, 00:28:05.235 "method": "bdev_nvme_attach_controller" 00:28:05.235 } 00:28:05.235 EOF 00:28:05.235 )") 00:28:05.235 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 
00:28:05.235 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:05.235 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:05.235 { 00:28:05.235 "params": { 00:28:05.235 "name": "Nvme$subsystem", 00:28:05.235 "trtype": "$TEST_TRANSPORT", 00:28:05.235 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:05.235 "adrfam": "ipv4", 00:28:05.235 "trsvcid": "$NVMF_PORT", 00:28:05.235 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:05.235 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:05.235 "hdgst": ${hdgst:-false}, 00:28:05.235 "ddgst": ${ddgst:-false} 00:28:05.235 }, 00:28:05.235 "method": "bdev_nvme_attach_controller" 00:28:05.235 } 00:28:05.235 EOF 00:28:05.235 )") 00:28:05.235 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:05.235 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:05.235 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:05.235 { 00:28:05.235 "params": { 00:28:05.235 "name": "Nvme$subsystem", 00:28:05.235 "trtype": "$TEST_TRANSPORT", 00:28:05.235 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:05.235 "adrfam": "ipv4", 00:28:05.235 "trsvcid": "$NVMF_PORT", 00:28:05.235 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:05.235 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:05.235 "hdgst": ${hdgst:-false}, 00:28:05.235 "ddgst": ${ddgst:-false} 00:28:05.235 }, 00:28:05.235 "method": "bdev_nvme_attach_controller" 00:28:05.235 } 00:28:05.235 EOF 00:28:05.235 )") 00:28:05.235 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:05.235 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
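[Editor's note] The xtrace above shows nvmf/common.sh assembling the bdevperf --json config one subsystem at a time: a heredoc fragment per id is appended to the config array, the fragments are comma-joined via IFS, and the result is run through jq (the expanded JSON appears in the next log entries). A minimal, self-contained sketch of that pattern follows; the function name with the _sketch suffix and the outer "subsystems"/"bdev" wrapper are assumptions, since only the per-controller fragments and the IFS=, / printf / jq steps are visible in this excerpt.

# Editorial sketch, not the actual nvmf/common.sh source.
gen_nvmf_target_json_sketch() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # One bdev_nvme_attach_controller entry per subsystem id, mirroring
        # the heredoc traced above.
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Comma-join the fragments ("${config[*]}" joins on the first char of IFS)
    # and pretty-print; the wrapper object here is an assumption.
    local IFS=,
    jq . <<EOF
{ "subsystems": [ { "subsystem": "bdev", "config": [ ${config[*]} ] } ] }
EOF
}

# Usage (assumed): gen_nvmf_target_json_sketch 1 2 3 > /tmp/bdevperf.json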
00:28:05.235 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:28:05.235 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:05.235 "params": { 00:28:05.235 "name": "Nvme1", 00:28:05.235 "trtype": "tcp", 00:28:05.235 "traddr": "10.0.0.2", 00:28:05.235 "adrfam": "ipv4", 00:28:05.235 "trsvcid": "4420", 00:28:05.235 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:05.235 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:05.235 "hdgst": false, 00:28:05.235 "ddgst": false 00:28:05.235 }, 00:28:05.235 "method": "bdev_nvme_attach_controller" 00:28:05.235 },{ 00:28:05.235 "params": { 00:28:05.235 "name": "Nvme2", 00:28:05.235 "trtype": "tcp", 00:28:05.235 "traddr": "10.0.0.2", 00:28:05.235 "adrfam": "ipv4", 00:28:05.235 "trsvcid": "4420", 00:28:05.235 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:05.235 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:05.235 "hdgst": false, 00:28:05.235 "ddgst": false 00:28:05.235 }, 00:28:05.235 "method": "bdev_nvme_attach_controller" 00:28:05.235 },{ 00:28:05.235 "params": { 00:28:05.235 "name": "Nvme3", 00:28:05.235 "trtype": "tcp", 00:28:05.235 "traddr": "10.0.0.2", 00:28:05.235 "adrfam": "ipv4", 00:28:05.235 "trsvcid": "4420", 00:28:05.235 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:05.235 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:05.235 "hdgst": false, 00:28:05.235 "ddgst": false 00:28:05.235 }, 00:28:05.235 "method": "bdev_nvme_attach_controller" 00:28:05.235 },{ 00:28:05.235 "params": { 00:28:05.235 "name": "Nvme4", 00:28:05.235 "trtype": "tcp", 00:28:05.235 "traddr": "10.0.0.2", 00:28:05.235 "adrfam": "ipv4", 00:28:05.235 "trsvcid": "4420", 00:28:05.235 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:05.235 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:05.235 "hdgst": false, 00:28:05.235 "ddgst": false 00:28:05.235 }, 00:28:05.235 "method": "bdev_nvme_attach_controller" 00:28:05.235 },{ 00:28:05.235 "params": { 00:28:05.235 "name": "Nvme5", 00:28:05.235 "trtype": "tcp", 00:28:05.235 "traddr": "10.0.0.2", 00:28:05.235 "adrfam": "ipv4", 00:28:05.235 "trsvcid": "4420", 00:28:05.235 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:05.235 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:05.235 "hdgst": false, 00:28:05.235 "ddgst": false 00:28:05.235 }, 00:28:05.235 "method": "bdev_nvme_attach_controller" 00:28:05.235 },{ 00:28:05.235 "params": { 00:28:05.235 "name": "Nvme6", 00:28:05.235 "trtype": "tcp", 00:28:05.235 "traddr": "10.0.0.2", 00:28:05.235 "adrfam": "ipv4", 00:28:05.235 "trsvcid": "4420", 00:28:05.235 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:05.235 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:05.235 "hdgst": false, 00:28:05.235 "ddgst": false 00:28:05.235 }, 00:28:05.235 "method": "bdev_nvme_attach_controller" 00:28:05.235 },{ 00:28:05.235 "params": { 00:28:05.235 "name": "Nvme7", 00:28:05.235 "trtype": "tcp", 00:28:05.235 "traddr": "10.0.0.2", 00:28:05.235 "adrfam": "ipv4", 00:28:05.235 "trsvcid": "4420", 00:28:05.235 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:05.235 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:05.235 "hdgst": false, 00:28:05.235 "ddgst": false 00:28:05.235 }, 00:28:05.235 "method": "bdev_nvme_attach_controller" 00:28:05.235 },{ 00:28:05.235 "params": { 00:28:05.235 "name": "Nvme8", 00:28:05.235 "trtype": "tcp", 00:28:05.235 "traddr": "10.0.0.2", 00:28:05.235 "adrfam": "ipv4", 00:28:05.235 "trsvcid": "4420", 00:28:05.235 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:05.235 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:05.235 "hdgst": false, 
00:28:05.235 "ddgst": false 00:28:05.235 }, 00:28:05.235 "method": "bdev_nvme_attach_controller" 00:28:05.235 },{ 00:28:05.235 "params": { 00:28:05.235 "name": "Nvme9", 00:28:05.235 "trtype": "tcp", 00:28:05.235 "traddr": "10.0.0.2", 00:28:05.235 "adrfam": "ipv4", 00:28:05.235 "trsvcid": "4420", 00:28:05.235 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:05.235 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:05.235 "hdgst": false, 00:28:05.235 "ddgst": false 00:28:05.235 }, 00:28:05.235 "method": "bdev_nvme_attach_controller" 00:28:05.235 },{ 00:28:05.235 "params": { 00:28:05.235 "name": "Nvme10", 00:28:05.235 "trtype": "tcp", 00:28:05.235 "traddr": "10.0.0.2", 00:28:05.235 "adrfam": "ipv4", 00:28:05.235 "trsvcid": "4420", 00:28:05.235 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:05.235 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:05.235 "hdgst": false, 00:28:05.235 "ddgst": false 00:28:05.235 }, 00:28:05.235 "method": "bdev_nvme_attach_controller" 00:28:05.235 }' 00:28:05.235 [2024-07-14 09:37:49.501697] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:28:05.235 [2024-07-14 09:37:49.501795] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid824516 ] 00:28:05.235 EAL: No free 2048 kB hugepages reported on node 1 00:28:05.235 [2024-07-14 09:37:49.568207] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:05.235 [2024-07-14 09:37:49.655763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:07.132 Running I/O for 1 seconds... 00:28:08.067 00:28:08.067 Latency(us) 00:28:08.067 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:08.067 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:08.067 Verification LBA range: start 0x0 length 0x400 00:28:08.067 Nvme1n1 : 1.09 235.90 14.74 0.00 0.00 268572.44 20388.98 250104.79 00:28:08.067 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:08.067 Verification LBA range: start 0x0 length 0x400 00:28:08.067 Nvme2n1 : 1.08 177.60 11.10 0.00 0.00 350374.87 23010.42 309135.74 00:28:08.067 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:08.067 Verification LBA range: start 0x0 length 0x400 00:28:08.067 Nvme3n1 : 1.16 220.43 13.78 0.00 0.00 278304.24 22816.24 274959.93 00:28:08.067 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:08.067 Verification LBA range: start 0x0 length 0x400 00:28:08.067 Nvme4n1 : 1.08 238.12 14.88 0.00 0.00 252082.82 19418.07 246997.90 00:28:08.067 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:08.067 Verification LBA range: start 0x0 length 0x400 00:28:08.067 Nvme5n1 : 1.14 224.97 14.06 0.00 0.00 263490.94 22330.79 250104.79 00:28:08.067 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:08.067 Verification LBA range: start 0x0 length 0x400 00:28:08.067 Nvme6n1 : 1.18 271.97 17.00 0.00 0.00 214451.84 18835.53 254765.13 00:28:08.067 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:08.067 Verification LBA range: start 0x0 length 0x400 00:28:08.067 Nvme7n1 : 1.17 273.86 17.12 0.00 0.00 209180.60 19612.25 237677.23 00:28:08.067 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:08.067 Verification LBA range: start 
0x0 length 0x400 00:28:08.067 Nvme8n1 : 1.18 270.36 16.90 0.00 0.00 209196.83 19223.89 250104.79 00:28:08.067 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:08.067 Verification LBA range: start 0x0 length 0x400 00:28:08.067 Nvme9n1 : 1.16 228.89 14.31 0.00 0.00 240762.09 3228.25 251658.24 00:28:08.067 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:08.067 Verification LBA range: start 0x0 length 0x400 00:28:08.067 Nvme10n1 : 1.17 219.29 13.71 0.00 0.00 248906.90 22427.88 262532.36 00:28:08.067 =================================================================================================================== 00:28:08.067 Total : 2361.39 147.59 0.00 0.00 248159.72 3228.25 309135.74 00:28:08.326 09:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:28:08.326 09:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:28:08.326 09:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:08.326 09:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:08.326 09:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:28:08.326 09:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:08.326 09:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:28:08.326 09:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:08.326 09:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:28:08.326 09:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:08.326 09:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:08.326 rmmod nvme_tcp 00:28:08.326 rmmod nvme_fabrics 00:28:08.326 rmmod nvme_keyring 00:28:08.326 09:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:08.326 09:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:28:08.326 09:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:28:08.326 09:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 824037 ']' 00:28:08.326 09:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 824037 00:28:08.326 09:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 824037 ']' 00:28:08.326 09:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 824037 00:28:08.326 09:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:28:08.326 09:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:08.326 09:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 824037 00:28:08.326 09:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:08.326 09:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 
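[Editor's note] The bdevperf summary above can be sanity-checked by hand: with 65536-byte IOs, MiB/s = IOPS * 65536 / 2^20 = IOPS / 16.

# Editorial sketch: cross-check the MiB/s column of the bdevperf table above.
awk 'BEGIN { printf "%.2f\n", 235.90 / 16 }'    # Nvme1n1: 14.74 MiB/s, as reported
awk 'BEGIN { printf "%.2f\n", 2361.39 / 16 }'   # Total:   147.59 MiB/s, as reported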
00:28:08.326 09:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 824037' 00:28:08.326 killing process with pid 824037 00:28:08.326 09:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 824037 00:28:08.326 09:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 824037 00:28:08.893 09:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:08.893 09:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:08.893 09:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:08.893 09:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:08.893 09:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:08.893 09:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:08.893 09:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:08.893 09:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:10.797 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:10.797 00:28:10.797 real 0m11.862s 00:28:10.797 user 0m34.356s 00:28:10.797 sys 0m3.285s 00:28:10.797 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:10.797 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:10.797 ************************************ 00:28:10.797 END TEST nvmf_shutdown_tc1 00:28:10.797 ************************************ 00:28:10.797 09:37:55 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:28:10.797 09:37:55 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:28:10.797 09:37:55 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:10.797 09:37:55 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:10.797 09:37:55 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:10.797 ************************************ 00:28:10.797 START TEST nvmf_shutdown_tc2 00:28:10.797 ************************************ 00:28:10.797 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:28:10.797 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:28:10.797 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:28:10.797 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:10.797 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:10.797 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:10.797 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:10.797 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:10.797 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:28:10.797 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:10.797 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:10.797 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:10.797 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:10.797 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:10.797 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:10.797 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:10.797 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:10.797 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:10.797 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:10.797 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:10.797 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:10.797 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:10.797 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:28:10.797 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:10.797 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:28:10.797 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:28:10.797 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:28:10.797 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:28:10.797 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:28:10.797 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:10.797 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:10.797 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:10.797 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:10.797 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:10.797 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:10.797 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:10.797 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:10.797 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:10.797 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:10.797 09:37:55 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:10.798 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:10.798 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:10.798 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:10.798 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:10.798 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:10.798 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:10.798 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:10.798 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:10.798 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:10.798 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:10.798 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:10.798 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:10.798 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:10.798 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:10.798 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:10.798 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:10.798 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:10.798 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:10.798 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:10.798 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:10.798 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:10.798 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:10.798 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:10.798 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:10.798 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:10.798 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:10.798 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:10.798 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:10.798 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:10.798 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:10.798 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:28:10.798 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:10.798 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:10.798 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:10.798 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:10.798 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:10.798 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:10.798 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:10.798 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:10.798 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:10.798 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:10.798 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:10.798 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:10.798 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:10.798 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:11.056 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:11.056 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:11.056 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:28:11.056 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:11.056 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:11.056 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:11.056 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:11.056 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:11.056 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:11.056 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:11.056 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:11.056 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:11.056 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:11.056 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:11.056 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:11.056 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:11.056 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 
addr flush cvl_0_1 00:28:11.056 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:11.056 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:11.056 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:11.056 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:11.056 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:11.056 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:11.056 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:11.056 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:11.056 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:11.056 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:11.056 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:28:11.056 00:28:11.056 --- 10.0.0.2 ping statistics --- 00:28:11.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:11.056 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:28:11.056 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:11.056 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:11.056 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:28:11.056 00:28:11.056 --- 10.0.0.1 ping statistics --- 00:28:11.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:11.056 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:28:11.056 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:11.056 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:28:11.056 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:11.056 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:11.056 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:11.056 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:11.056 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:11.056 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:11.056 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:11.056 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:28:11.056 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:11.056 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:11.056 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:11.056 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@481 -- # nvmfpid=825306 00:28:11.056 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:11.056 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 825306 00:28:11.056 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 825306 ']' 00:28:11.056 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:11.056 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:11.056 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:11.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:11.056 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:11.056 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:11.056 [2024-07-14 09:37:55.463561] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:28:11.056 [2024-07-14 09:37:55.463630] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:11.056 EAL: No free 2048 kB hugepages reported on node 1 00:28:11.313 [2024-07-14 09:37:55.531194] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:11.313 [2024-07-14 09:37:55.615775] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:11.313 [2024-07-14 09:37:55.615827] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:11.314 [2024-07-14 09:37:55.615871] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:11.314 [2024-07-14 09:37:55.615885] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:11.314 [2024-07-14 09:37:55.615895] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
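[Editor's note] The nvmf_tgt above is started with core mask 0x1E while the bdev_svc/bdevperf instances elsewhere in this log run with core mask 0x1, so target and initiator-side apps never share a core: 0x1E = 0b11110 selects cores 1-4 (hence the four "Reactor started on core 1..4" notices that follow), and 0x1 selects core 0. A small, purely illustrative helper to decode such masks:

# Editorial sketch: decode an SPDK/DPDK core mask into a core list.
decode_coremask() {
    local mask=$(( $1 )) core=0 cores=()
    while (( mask > 0 )); do
        if (( mask & 1 )); then
            cores+=("$core")
        fi
        core=$(( core + 1 ))
        mask=$(( mask >> 1 ))
    done
    echo "cores: ${cores[*]}"
}
decode_coremask 0x1E   # -> cores: 1 2 3 4  (the nvmf_tgt above)
decode_coremask 0x1    # -> cores: 0        (the bdevperf/bdev_svc apps in this log)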
00:28:11.314 [2024-07-14 09:37:55.615986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:11.314 [2024-07-14 09:37:55.616051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:11.314 [2024-07-14 09:37:55.616101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:28:11.314 [2024-07-14 09:37:55.616105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:11.314 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:11.314 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:28:11.314 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:11.314 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:11.314 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:11.314 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:11.314 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:11.314 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.314 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:11.571 [2024-07-14 09:37:55.766819] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:11.571 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.572 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:28:11.572 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:28:11.572 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:11.572 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:11.572 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:11.572 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:11.572 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:11.572 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:11.572 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:11.572 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:11.572 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:11.572 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:11.572 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:11.572 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:11.572 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:11.572 09:37:55 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:11.572 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:11.572 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:11.572 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:11.572 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:11.572 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:11.572 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:11.572 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:11.572 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:11.572 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:11.572 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:28:11.572 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.572 09:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:11.572 Malloc1 00:28:11.572 [2024-07-14 09:37:55.856475] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:11.572 Malloc2 00:28:11.572 Malloc3 00:28:11.572 Malloc4 00:28:11.830 Malloc5 00:28:11.830 Malloc6 00:28:11.830 Malloc7 00:28:11.830 Malloc8 00:28:11.830 Malloc9 00:28:12.088 Malloc10 00:28:12.088 09:37:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.088 09:37:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:28:12.088 09:37:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:12.088 09:37:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:12.088 09:37:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=825452 00:28:12.088 09:37:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 825452 /var/tmp/bdevperf.sock 00:28:12.088 09:37:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 825452 ']' 00:28:12.088 09:37:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:12.088 09:37:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:12.088 09:37:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:12.089 09:37:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:12.089 09:37:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:28:12.089 09:37:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:28:12.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:12.089 09:37:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:28:12.089 09:37:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:12.089 09:37:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:12.089 09:37:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:12.089 09:37:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:12.089 { 00:28:12.089 "params": { 00:28:12.089 "name": "Nvme$subsystem", 00:28:12.089 "trtype": "$TEST_TRANSPORT", 00:28:12.089 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.089 "adrfam": "ipv4", 00:28:12.089 "trsvcid": "$NVMF_PORT", 00:28:12.089 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.089 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.089 "hdgst": ${hdgst:-false}, 00:28:12.089 "ddgst": ${ddgst:-false} 00:28:12.089 }, 00:28:12.089 "method": "bdev_nvme_attach_controller" 00:28:12.089 } 00:28:12.089 EOF 00:28:12.089 )") 00:28:12.089 09:37:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:12.089 09:37:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:12.089 09:37:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:12.089 { 00:28:12.089 "params": { 00:28:12.089 "name": "Nvme$subsystem", 00:28:12.089 "trtype": "$TEST_TRANSPORT", 00:28:12.089 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.089 "adrfam": "ipv4", 00:28:12.089 "trsvcid": "$NVMF_PORT", 00:28:12.089 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.089 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.089 "hdgst": ${hdgst:-false}, 00:28:12.089 "ddgst": ${ddgst:-false} 00:28:12.089 }, 00:28:12.089 "method": "bdev_nvme_attach_controller" 00:28:12.089 } 00:28:12.089 EOF 00:28:12.089 )") 00:28:12.089 09:37:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:12.089 09:37:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:12.089 09:37:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:12.089 { 00:28:12.089 "params": { 00:28:12.089 "name": "Nvme$subsystem", 00:28:12.089 "trtype": "$TEST_TRANSPORT", 00:28:12.089 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.089 "adrfam": "ipv4", 00:28:12.089 "trsvcid": "$NVMF_PORT", 00:28:12.089 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.089 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.089 "hdgst": ${hdgst:-false}, 00:28:12.089 "ddgst": ${ddgst:-false} 00:28:12.089 }, 00:28:12.089 "method": "bdev_nvme_attach_controller" 00:28:12.089 } 00:28:12.089 EOF 00:28:12.089 )") 00:28:12.089 09:37:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:12.089 09:37:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:12.089 09:37:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:12.089 { 00:28:12.089 "params": { 00:28:12.089 "name": "Nvme$subsystem", 00:28:12.089 "trtype": "$TEST_TRANSPORT", 00:28:12.089 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.089 "adrfam": "ipv4", 00:28:12.089 "trsvcid": "$NVMF_PORT", 
00:28:12.089 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.089 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.089 "hdgst": ${hdgst:-false}, 00:28:12.089 "ddgst": ${ddgst:-false} 00:28:12.089 }, 00:28:12.089 "method": "bdev_nvme_attach_controller" 00:28:12.089 } 00:28:12.089 EOF 00:28:12.089 )") 00:28:12.089 09:37:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:12.089 09:37:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:12.089 09:37:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:12.089 { 00:28:12.089 "params": { 00:28:12.089 "name": "Nvme$subsystem", 00:28:12.089 "trtype": "$TEST_TRANSPORT", 00:28:12.089 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.089 "adrfam": "ipv4", 00:28:12.089 "trsvcid": "$NVMF_PORT", 00:28:12.089 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.089 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.089 "hdgst": ${hdgst:-false}, 00:28:12.089 "ddgst": ${ddgst:-false} 00:28:12.089 }, 00:28:12.089 "method": "bdev_nvme_attach_controller" 00:28:12.089 } 00:28:12.089 EOF 00:28:12.089 )") 00:28:12.089 09:37:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:12.089 09:37:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:12.089 09:37:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:12.089 { 00:28:12.089 "params": { 00:28:12.089 "name": "Nvme$subsystem", 00:28:12.089 "trtype": "$TEST_TRANSPORT", 00:28:12.089 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.089 "adrfam": "ipv4", 00:28:12.089 "trsvcid": "$NVMF_PORT", 00:28:12.089 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.089 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.089 "hdgst": ${hdgst:-false}, 00:28:12.089 "ddgst": ${ddgst:-false} 00:28:12.089 }, 00:28:12.089 "method": "bdev_nvme_attach_controller" 00:28:12.089 } 00:28:12.089 EOF 00:28:12.089 )") 00:28:12.089 09:37:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:12.089 09:37:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:12.089 09:37:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:12.089 { 00:28:12.089 "params": { 00:28:12.089 "name": "Nvme$subsystem", 00:28:12.089 "trtype": "$TEST_TRANSPORT", 00:28:12.089 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.089 "adrfam": "ipv4", 00:28:12.089 "trsvcid": "$NVMF_PORT", 00:28:12.089 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.089 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.089 "hdgst": ${hdgst:-false}, 00:28:12.089 "ddgst": ${ddgst:-false} 00:28:12.089 }, 00:28:12.089 "method": "bdev_nvme_attach_controller" 00:28:12.089 } 00:28:12.089 EOF 00:28:12.089 )") 00:28:12.089 09:37:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:12.089 09:37:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:12.089 09:37:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:12.089 { 00:28:12.089 "params": { 00:28:12.089 "name": "Nvme$subsystem", 00:28:12.089 "trtype": "$TEST_TRANSPORT", 00:28:12.089 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.089 "adrfam": "ipv4", 00:28:12.089 "trsvcid": "$NVMF_PORT", 00:28:12.089 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.089 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.089 "hdgst": ${hdgst:-false}, 00:28:12.089 "ddgst": ${ddgst:-false} 00:28:12.089 }, 00:28:12.089 "method": "bdev_nvme_attach_controller" 00:28:12.089 } 00:28:12.089 EOF 00:28:12.089 )") 00:28:12.089 09:37:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:12.089 09:37:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:12.089 09:37:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:12.089 { 00:28:12.089 "params": { 00:28:12.089 "name": "Nvme$subsystem", 00:28:12.089 "trtype": "$TEST_TRANSPORT", 00:28:12.089 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.089 "adrfam": "ipv4", 00:28:12.089 "trsvcid": "$NVMF_PORT", 00:28:12.089 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.089 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.089 "hdgst": ${hdgst:-false}, 00:28:12.089 "ddgst": ${ddgst:-false} 00:28:12.089 }, 00:28:12.089 "method": "bdev_nvme_attach_controller" 00:28:12.089 } 00:28:12.089 EOF 00:28:12.089 )") 00:28:12.089 09:37:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:12.089 09:37:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:12.089 09:37:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:12.089 { 00:28:12.089 "params": { 00:28:12.089 "name": "Nvme$subsystem", 00:28:12.089 "trtype": "$TEST_TRANSPORT", 00:28:12.089 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.089 "adrfam": "ipv4", 00:28:12.089 "trsvcid": "$NVMF_PORT", 00:28:12.089 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.089 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.089 "hdgst": ${hdgst:-false}, 00:28:12.089 "ddgst": ${ddgst:-false} 00:28:12.089 }, 00:28:12.089 "method": "bdev_nvme_attach_controller" 00:28:12.089 } 00:28:12.089 EOF 00:28:12.089 )") 00:28:12.089 09:37:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:12.089 09:37:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:28:12.089 09:37:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:28:12.089 09:37:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:12.089 "params": { 00:28:12.089 "name": "Nvme1", 00:28:12.089 "trtype": "tcp", 00:28:12.089 "traddr": "10.0.0.2", 00:28:12.089 "adrfam": "ipv4", 00:28:12.089 "trsvcid": "4420", 00:28:12.089 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:12.089 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:12.089 "hdgst": false, 00:28:12.089 "ddgst": false 00:28:12.089 }, 00:28:12.090 "method": "bdev_nvme_attach_controller" 00:28:12.090 },{ 00:28:12.090 "params": { 00:28:12.090 "name": "Nvme2", 00:28:12.090 "trtype": "tcp", 00:28:12.090 "traddr": "10.0.0.2", 00:28:12.090 "adrfam": "ipv4", 00:28:12.090 "trsvcid": "4420", 00:28:12.090 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:12.090 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:12.090 "hdgst": false, 00:28:12.090 "ddgst": false 00:28:12.090 }, 00:28:12.090 "method": "bdev_nvme_attach_controller" 00:28:12.090 },{ 00:28:12.090 "params": { 00:28:12.090 "name": "Nvme3", 00:28:12.090 "trtype": "tcp", 00:28:12.090 "traddr": "10.0.0.2", 00:28:12.090 "adrfam": "ipv4", 00:28:12.090 "trsvcid": "4420", 00:28:12.090 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:12.090 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:12.090 "hdgst": false, 00:28:12.090 "ddgst": false 00:28:12.090 }, 00:28:12.090 "method": "bdev_nvme_attach_controller" 00:28:12.090 },{ 00:28:12.090 "params": { 00:28:12.090 "name": "Nvme4", 00:28:12.090 "trtype": "tcp", 00:28:12.090 "traddr": "10.0.0.2", 00:28:12.090 "adrfam": "ipv4", 00:28:12.090 "trsvcid": "4420", 00:28:12.090 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:12.090 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:12.090 "hdgst": false, 00:28:12.090 "ddgst": false 00:28:12.090 }, 00:28:12.090 "method": "bdev_nvme_attach_controller" 00:28:12.090 },{ 00:28:12.090 "params": { 00:28:12.090 "name": "Nvme5", 00:28:12.090 "trtype": "tcp", 00:28:12.090 "traddr": "10.0.0.2", 00:28:12.090 "adrfam": "ipv4", 00:28:12.090 "trsvcid": "4420", 00:28:12.090 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:12.090 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:12.090 "hdgst": false, 00:28:12.090 "ddgst": false 00:28:12.090 }, 00:28:12.090 "method": "bdev_nvme_attach_controller" 00:28:12.090 },{ 00:28:12.090 "params": { 00:28:12.090 "name": "Nvme6", 00:28:12.090 "trtype": "tcp", 00:28:12.090 "traddr": "10.0.0.2", 00:28:12.090 "adrfam": "ipv4", 00:28:12.090 "trsvcid": "4420", 00:28:12.090 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:12.090 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:12.090 "hdgst": false, 00:28:12.090 "ddgst": false 00:28:12.090 }, 00:28:12.090 "method": "bdev_nvme_attach_controller" 00:28:12.090 },{ 00:28:12.090 "params": { 00:28:12.090 "name": "Nvme7", 00:28:12.090 "trtype": "tcp", 00:28:12.090 "traddr": "10.0.0.2", 00:28:12.090 "adrfam": "ipv4", 00:28:12.090 "trsvcid": "4420", 00:28:12.090 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:12.090 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:12.090 "hdgst": false, 00:28:12.090 "ddgst": false 00:28:12.090 }, 00:28:12.090 "method": "bdev_nvme_attach_controller" 00:28:12.090 },{ 00:28:12.090 "params": { 00:28:12.090 "name": "Nvme8", 00:28:12.090 "trtype": "tcp", 00:28:12.090 "traddr": "10.0.0.2", 00:28:12.090 "adrfam": "ipv4", 00:28:12.090 "trsvcid": "4420", 00:28:12.090 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:12.090 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:12.090 "hdgst": false, 
00:28:12.090 "ddgst": false 00:28:12.090 }, 00:28:12.090 "method": "bdev_nvme_attach_controller" 00:28:12.090 },{ 00:28:12.090 "params": { 00:28:12.090 "name": "Nvme9", 00:28:12.090 "trtype": "tcp", 00:28:12.090 "traddr": "10.0.0.2", 00:28:12.090 "adrfam": "ipv4", 00:28:12.090 "trsvcid": "4420", 00:28:12.090 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:12.090 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:12.090 "hdgst": false, 00:28:12.090 "ddgst": false 00:28:12.090 }, 00:28:12.090 "method": "bdev_nvme_attach_controller" 00:28:12.090 },{ 00:28:12.090 "params": { 00:28:12.090 "name": "Nvme10", 00:28:12.090 "trtype": "tcp", 00:28:12.090 "traddr": "10.0.0.2", 00:28:12.090 "adrfam": "ipv4", 00:28:12.090 "trsvcid": "4420", 00:28:12.090 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:12.090 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:12.090 "hdgst": false, 00:28:12.090 "ddgst": false 00:28:12.090 }, 00:28:12.090 "method": "bdev_nvme_attach_controller" 00:28:12.090 }' 00:28:12.090 [2024-07-14 09:37:56.384205] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:28:12.090 [2024-07-14 09:37:56.384319] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid825452 ] 00:28:12.090 EAL: No free 2048 kB hugepages reported on node 1 00:28:12.090 [2024-07-14 09:37:56.449449] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:12.090 [2024-07-14 09:37:56.537755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:13.985 Running I/O for 10 seconds... 00:28:13.985 09:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:13.985 09:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:28:13.985 09:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:13.985 09:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.985 09:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:13.985 09:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.985 09:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:13.985 09:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:13.985 09:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:28:13.985 09:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:28:13.985 09:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:28:13.985 09:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:28:13.985 09:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:13.985 09:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:13.985 09:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:13.985 09:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.985 09:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:14.244 09:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.244 09:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:28:14.244 09:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:28:14.244 09:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:14.503 09:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:14.503 09:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:14.503 09:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:14.503 09:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:14.503 09:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.503 09:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:14.503 09:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.503 09:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:28:14.503 09:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:28:14.503 09:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:28:14.503 09:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:28:14.503 09:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:28:14.503 09:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 825452 00:28:14.503 09:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 825452 ']' 00:28:14.503 09:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 825452 00:28:14.503 09:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:28:14.503 09:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:14.503 09:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 825452 00:28:14.503 09:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:14.503 09:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:14.503 09:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 825452' 00:28:14.503 killing process with pid 825452 00:28:14.503 09:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 825452 00:28:14.503 09:37:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 825452 00:28:14.503 Received shutdown signal, test time was about 0.773701 seconds 00:28:14.503 00:28:14.503 Latency(us) 00:28:14.503 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:14.503 Job: Nvme1n1 (Core Mask 
0x1, workload: verify, depth: 64, IO size: 65536) 00:28:14.503 Verification LBA range: start 0x0 length 0x400 00:28:14.503 Nvme1n1 : 0.74 344.43 21.53 0.00 0.00 182997.90 28544.57 203501.42 00:28:14.503 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:14.503 Verification LBA range: start 0x0 length 0x400 00:28:14.503 Nvme2n1 : 0.71 179.30 11.21 0.00 0.00 341521.07 24369.68 295154.73 00:28:14.503 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:14.503 Verification LBA range: start 0x0 length 0x400 00:28:14.503 Nvme3n1 : 0.77 159.94 10.00 0.00 0.00 372612.12 12379.02 372827.02 00:28:14.503 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:14.503 Verification LBA range: start 0x0 length 0x400 00:28:14.503 Nvme4n1 : 0.74 260.24 16.26 0.00 0.00 224078.51 22524.97 212822.09 00:28:14.503 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:14.503 Verification LBA range: start 0x0 length 0x400 00:28:14.503 Nvme5n1 : 0.70 274.30 17.14 0.00 0.00 205302.14 23204.60 214375.54 00:28:14.503 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:14.503 Verification LBA range: start 0x0 length 0x400 00:28:14.503 Nvme6n1 : 0.71 181.43 11.34 0.00 0.00 301688.60 40777.96 271853.04 00:28:14.503 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:14.503 Verification LBA range: start 0x0 length 0x400 00:28:14.503 Nvme7n1 : 0.77 165.62 10.35 0.00 0.00 327180.52 14078.10 372827.02 00:28:14.503 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:14.503 Verification LBA range: start 0x0 length 0x400 00:28:14.503 Nvme8n1 : 0.71 179.56 11.22 0.00 0.00 286902.04 27962.03 292047.83 00:28:14.503 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:14.503 Verification LBA range: start 0x0 length 0x400 00:28:14.503 Nvme9n1 : 0.72 266.92 16.68 0.00 0.00 188354.56 21359.88 212822.09 00:28:14.503 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:14.503 Verification LBA range: start 0x0 length 0x400 00:28:14.504 Nvme10n1 : 0.73 176.55 11.03 0.00 0.00 276596.81 60584.39 222142.77 00:28:14.504 =================================================================================================================== 00:28:14.504 Total : 2188.28 136.77 0.00 0.00 255562.20 12379.02 372827.02 00:28:14.761 09:37:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:28:15.693 09:38:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 825306 00:28:15.693 09:38:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:28:15.693 09:38:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:28:15.693 09:38:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:15.694 09:38:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:15.694 09:38:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:28:15.951 09:38:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:15.951 09:38:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:28:15.951 
09:38:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:15.951 09:38:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:28:15.951 09:38:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:15.951 09:38:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:15.951 rmmod nvme_tcp 00:28:15.951 rmmod nvme_fabrics 00:28:15.951 rmmod nvme_keyring 00:28:15.951 09:38:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:15.951 09:38:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:28:15.951 09:38:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:28:15.951 09:38:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 825306 ']' 00:28:15.952 09:38:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 825306 00:28:15.952 09:38:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 825306 ']' 00:28:15.952 09:38:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 825306 00:28:15.952 09:38:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:28:15.952 09:38:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:15.952 09:38:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 825306 00:28:15.952 09:38:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:15.952 09:38:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:15.952 09:38:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 825306' 00:28:15.952 killing process with pid 825306 00:28:15.952 09:38:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 825306 00:28:15.952 09:38:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 825306 00:28:16.518 09:38:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:16.518 09:38:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:16.518 09:38:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:16.518 09:38:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:16.518 09:38:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:16.518 09:38:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:16.518 09:38:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:16.518 09:38:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:18.447 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:18.447 00:28:18.447 real 0m7.549s 00:28:18.447 user 0m22.406s 00:28:18.447 sys 0m1.450s 00:28:18.447 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:28:18.447 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:18.447 ************************************ 00:28:18.447 END TEST nvmf_shutdown_tc2 00:28:18.447 ************************************ 00:28:18.447 09:38:02 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:28:18.447 09:38:02 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:28:18.447 09:38:02 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:18.447 09:38:02 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:18.447 09:38:02 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:18.447 ************************************ 00:28:18.447 START TEST nvmf_shutdown_tc3 00:28:18.448 ************************************ 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:28:18.448 09:38:02 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:18.448 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:18.448 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:18.448 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:18.448 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:18.448 09:38:02 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:18.448 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:18.449 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:18.449 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:18.449 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:18.707 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:18.707 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:18.707 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:18.707 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:18.707 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:18.707 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:18.707 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:18.707 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:18.707 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:28:18.707 00:28:18.707 --- 10.0.0.2 ping statistics --- 00:28:18.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:18.707 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:28:18.707 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:18.707 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:18.707 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:28:18.707 00:28:18.707 --- 10.0.0.1 ping statistics --- 00:28:18.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:18.707 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:28:18.707 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:18.707 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:28:18.707 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:18.707 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:18.707 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:18.707 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:18.707 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:18.707 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:18.707 09:38:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:18.707 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:28:18.707 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:18.707 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:18.707 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:18.707 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=826358 00:28:18.707 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:18.707 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 826358 00:28:18.707 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 826358 ']' 00:28:18.707 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:18.707 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:18.707 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:18.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:18.707 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:18.707 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:18.707 [2024-07-14 09:38:03.058170] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:28:18.707 [2024-07-14 09:38:03.058247] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:18.707 EAL: No free 2048 kB hugepages reported on node 1 00:28:18.707 [2024-07-14 09:38:03.125936] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:18.964 [2024-07-14 09:38:03.217489] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:18.964 [2024-07-14 09:38:03.217555] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:18.964 [2024-07-14 09:38:03.217579] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:18.964 [2024-07-14 09:38:03.217592] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:18.964 [2024-07-14 09:38:03.217604] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:18.964 [2024-07-14 09:38:03.217702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:18.964 [2024-07-14 09:38:03.217796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:18.964 [2024-07-14 09:38:03.217860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:28:18.964 [2024-07-14 09:38:03.217862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:18.964 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:18.964 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:28:18.964 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:18.964 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:18.964 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:18.964 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:18.964 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:18.964 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.964 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:18.964 [2024-07-14 09:38:03.362566] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:18.964 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.964 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:28:18.964 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:28:18.964 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:28:18.964 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:18.964 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:18.964 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:18.964 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:18.964 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:18.964 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:18.964 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:18.964 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:18.964 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:18.964 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:18.964 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:18.964 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:18.964 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:18.964 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:18.964 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:18.964 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:18.964 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:18.964 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:18.964 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:18.964 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:18.964 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:18.964 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:18.964 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:28:18.964 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.964 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:19.221 Malloc1 00:28:19.221 [2024-07-14 09:38:03.439036] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:19.221 Malloc2 00:28:19.221 Malloc3 00:28:19.221 Malloc4 00:28:19.221 Malloc5 00:28:19.221 Malloc6 00:28:19.478 Malloc7 00:28:19.478 Malloc8 00:28:19.478 Malloc9 00:28:19.478 Malloc10 00:28:19.478 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.478 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:28:19.478 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:19.478 
09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:19.478 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=826535 00:28:19.478 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 826535 /var/tmp/bdevperf.sock 00:28:19.478 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 826535 ']' 00:28:19.478 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:19.478 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:19.478 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:19.478 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:19.478 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:19.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:19.478 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:28:19.478 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:19.478 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:28:19.478 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:19.478 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:19.478 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:19.478 { 00:28:19.478 "params": { 00:28:19.478 "name": "Nvme$subsystem", 00:28:19.478 "trtype": "$TEST_TRANSPORT", 00:28:19.478 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.478 "adrfam": "ipv4", 00:28:19.478 "trsvcid": "$NVMF_PORT", 00:28:19.479 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.479 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.479 "hdgst": ${hdgst:-false}, 00:28:19.479 "ddgst": ${ddgst:-false} 00:28:19.479 }, 00:28:19.479 "method": "bdev_nvme_attach_controller" 00:28:19.479 } 00:28:19.479 EOF 00:28:19.479 )") 00:28:19.479 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:19.479 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:19.479 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:19.479 { 00:28:19.479 "params": { 00:28:19.479 "name": "Nvme$subsystem", 00:28:19.479 "trtype": "$TEST_TRANSPORT", 00:28:19.479 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.479 "adrfam": "ipv4", 00:28:19.479 "trsvcid": "$NVMF_PORT", 00:28:19.479 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.479 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.479 "hdgst": ${hdgst:-false}, 00:28:19.479 "ddgst": ${ddgst:-false} 00:28:19.479 }, 00:28:19.479 "method": "bdev_nvme_attach_controller" 00:28:19.479 } 00:28:19.479 EOF 00:28:19.479 )") 00:28:19.479 09:38:03 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:19.479 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:19.479 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:19.479 { 00:28:19.479 "params": { 00:28:19.479 "name": "Nvme$subsystem", 00:28:19.479 "trtype": "$TEST_TRANSPORT", 00:28:19.479 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.479 "adrfam": "ipv4", 00:28:19.479 "trsvcid": "$NVMF_PORT", 00:28:19.479 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.479 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.479 "hdgst": ${hdgst:-false}, 00:28:19.479 "ddgst": ${ddgst:-false} 00:28:19.479 }, 00:28:19.479 "method": "bdev_nvme_attach_controller" 00:28:19.479 } 00:28:19.479 EOF 00:28:19.479 )") 00:28:19.479 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:19.479 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:19.479 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:19.479 { 00:28:19.479 "params": { 00:28:19.479 "name": "Nvme$subsystem", 00:28:19.479 "trtype": "$TEST_TRANSPORT", 00:28:19.479 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.479 "adrfam": "ipv4", 00:28:19.479 "trsvcid": "$NVMF_PORT", 00:28:19.479 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.479 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.479 "hdgst": ${hdgst:-false}, 00:28:19.479 "ddgst": ${ddgst:-false} 00:28:19.479 }, 00:28:19.479 "method": "bdev_nvme_attach_controller" 00:28:19.479 } 00:28:19.479 EOF 00:28:19.479 )") 00:28:19.479 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:19.479 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:19.479 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:19.479 { 00:28:19.479 "params": { 00:28:19.479 "name": "Nvme$subsystem", 00:28:19.479 "trtype": "$TEST_TRANSPORT", 00:28:19.479 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.479 "adrfam": "ipv4", 00:28:19.479 "trsvcid": "$NVMF_PORT", 00:28:19.479 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.479 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.479 "hdgst": ${hdgst:-false}, 00:28:19.479 "ddgst": ${ddgst:-false} 00:28:19.479 }, 00:28:19.479 "method": "bdev_nvme_attach_controller" 00:28:19.479 } 00:28:19.479 EOF 00:28:19.479 )") 00:28:19.479 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:19.479 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:19.479 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:19.479 { 00:28:19.479 "params": { 00:28:19.479 "name": "Nvme$subsystem", 00:28:19.479 "trtype": "$TEST_TRANSPORT", 00:28:19.479 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.479 "adrfam": "ipv4", 00:28:19.479 "trsvcid": "$NVMF_PORT", 00:28:19.479 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.479 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.479 "hdgst": ${hdgst:-false}, 00:28:19.479 "ddgst": ${ddgst:-false} 00:28:19.479 }, 00:28:19.479 "method": "bdev_nvme_attach_controller" 00:28:19.479 } 00:28:19.479 EOF 00:28:19.479 )") 00:28:19.479 09:38:03 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:19.479 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:19.479 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:19.479 { 00:28:19.479 "params": { 00:28:19.479 "name": "Nvme$subsystem", 00:28:19.479 "trtype": "$TEST_TRANSPORT", 00:28:19.479 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.479 "adrfam": "ipv4", 00:28:19.479 "trsvcid": "$NVMF_PORT", 00:28:19.479 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.479 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.479 "hdgst": ${hdgst:-false}, 00:28:19.479 "ddgst": ${ddgst:-false} 00:28:19.479 }, 00:28:19.479 "method": "bdev_nvme_attach_controller" 00:28:19.479 } 00:28:19.479 EOF 00:28:19.479 )") 00:28:19.479 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:19.479 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:19.479 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:19.479 { 00:28:19.479 "params": { 00:28:19.479 "name": "Nvme$subsystem", 00:28:19.479 "trtype": "$TEST_TRANSPORT", 00:28:19.479 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.479 "adrfam": "ipv4", 00:28:19.479 "trsvcid": "$NVMF_PORT", 00:28:19.479 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.479 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.479 "hdgst": ${hdgst:-false}, 00:28:19.479 "ddgst": ${ddgst:-false} 00:28:19.479 }, 00:28:19.479 "method": "bdev_nvme_attach_controller" 00:28:19.479 } 00:28:19.479 EOF 00:28:19.479 )") 00:28:19.479 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:19.479 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:19.737 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:19.737 { 00:28:19.737 "params": { 00:28:19.737 "name": "Nvme$subsystem", 00:28:19.737 "trtype": "$TEST_TRANSPORT", 00:28:19.737 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.737 "adrfam": "ipv4", 00:28:19.737 "trsvcid": "$NVMF_PORT", 00:28:19.737 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.737 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.737 "hdgst": ${hdgst:-false}, 00:28:19.737 "ddgst": ${ddgst:-false} 00:28:19.737 }, 00:28:19.737 "method": "bdev_nvme_attach_controller" 00:28:19.737 } 00:28:19.737 EOF 00:28:19.737 )") 00:28:19.737 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:19.737 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:19.737 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:19.737 { 00:28:19.737 "params": { 00:28:19.737 "name": "Nvme$subsystem", 00:28:19.737 "trtype": "$TEST_TRANSPORT", 00:28:19.737 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.737 "adrfam": "ipv4", 00:28:19.737 "trsvcid": "$NVMF_PORT", 00:28:19.737 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.737 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.737 "hdgst": ${hdgst:-false}, 00:28:19.737 "ddgst": ${ddgst:-false} 00:28:19.737 }, 00:28:19.737 "method": "bdev_nvme_attach_controller" 00:28:19.737 } 00:28:19.737 EOF 00:28:19.737 )") 00:28:19.737 09:38:03 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:19.737 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:28:19.737 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:28:19.737 09:38:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:19.737 "params": { 00:28:19.737 "name": "Nvme1", 00:28:19.737 "trtype": "tcp", 00:28:19.737 "traddr": "10.0.0.2", 00:28:19.737 "adrfam": "ipv4", 00:28:19.737 "trsvcid": "4420", 00:28:19.737 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:19.737 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:19.737 "hdgst": false, 00:28:19.737 "ddgst": false 00:28:19.737 }, 00:28:19.737 "method": "bdev_nvme_attach_controller" 00:28:19.737 },{ 00:28:19.737 "params": { 00:28:19.737 "name": "Nvme2", 00:28:19.737 "trtype": "tcp", 00:28:19.737 "traddr": "10.0.0.2", 00:28:19.737 "adrfam": "ipv4", 00:28:19.737 "trsvcid": "4420", 00:28:19.737 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:19.737 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:19.737 "hdgst": false, 00:28:19.737 "ddgst": false 00:28:19.737 }, 00:28:19.737 "method": "bdev_nvme_attach_controller" 00:28:19.737 },{ 00:28:19.737 "params": { 00:28:19.737 "name": "Nvme3", 00:28:19.737 "trtype": "tcp", 00:28:19.737 "traddr": "10.0.0.2", 00:28:19.737 "adrfam": "ipv4", 00:28:19.737 "trsvcid": "4420", 00:28:19.737 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:19.737 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:19.737 "hdgst": false, 00:28:19.737 "ddgst": false 00:28:19.737 }, 00:28:19.737 "method": "bdev_nvme_attach_controller" 00:28:19.737 },{ 00:28:19.738 "params": { 00:28:19.738 "name": "Nvme4", 00:28:19.738 "trtype": "tcp", 00:28:19.738 "traddr": "10.0.0.2", 00:28:19.738 "adrfam": "ipv4", 00:28:19.738 "trsvcid": "4420", 00:28:19.738 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:19.738 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:19.738 "hdgst": false, 00:28:19.738 "ddgst": false 00:28:19.738 }, 00:28:19.738 "method": "bdev_nvme_attach_controller" 00:28:19.738 },{ 00:28:19.738 "params": { 00:28:19.738 "name": "Nvme5", 00:28:19.738 "trtype": "tcp", 00:28:19.738 "traddr": "10.0.0.2", 00:28:19.738 "adrfam": "ipv4", 00:28:19.738 "trsvcid": "4420", 00:28:19.738 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:19.738 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:19.738 "hdgst": false, 00:28:19.738 "ddgst": false 00:28:19.738 }, 00:28:19.738 "method": "bdev_nvme_attach_controller" 00:28:19.738 },{ 00:28:19.738 "params": { 00:28:19.738 "name": "Nvme6", 00:28:19.738 "trtype": "tcp", 00:28:19.738 "traddr": "10.0.0.2", 00:28:19.738 "adrfam": "ipv4", 00:28:19.738 "trsvcid": "4420", 00:28:19.738 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:19.738 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:19.738 "hdgst": false, 00:28:19.738 "ddgst": false 00:28:19.738 }, 00:28:19.738 "method": "bdev_nvme_attach_controller" 00:28:19.738 },{ 00:28:19.738 "params": { 00:28:19.738 "name": "Nvme7", 00:28:19.738 "trtype": "tcp", 00:28:19.738 "traddr": "10.0.0.2", 00:28:19.738 "adrfam": "ipv4", 00:28:19.738 "trsvcid": "4420", 00:28:19.738 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:19.738 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:19.738 "hdgst": false, 00:28:19.738 "ddgst": false 00:28:19.738 }, 00:28:19.738 "method": "bdev_nvme_attach_controller" 00:28:19.738 },{ 00:28:19.738 "params": { 00:28:19.738 "name": "Nvme8", 00:28:19.738 "trtype": "tcp", 00:28:19.738 "traddr": "10.0.0.2", 00:28:19.738 "adrfam": "ipv4", 
00:28:19.738 "trsvcid": "4420", 00:28:19.738 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:19.738 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:19.738 "hdgst": false, 00:28:19.738 "ddgst": false 00:28:19.738 }, 00:28:19.738 "method": "bdev_nvme_attach_controller" 00:28:19.738 },{ 00:28:19.738 "params": { 00:28:19.738 "name": "Nvme9", 00:28:19.738 "trtype": "tcp", 00:28:19.738 "traddr": "10.0.0.2", 00:28:19.738 "adrfam": "ipv4", 00:28:19.738 "trsvcid": "4420", 00:28:19.738 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:19.738 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:19.738 "hdgst": false, 00:28:19.738 "ddgst": false 00:28:19.738 }, 00:28:19.738 "method": "bdev_nvme_attach_controller" 00:28:19.738 },{ 00:28:19.738 "params": { 00:28:19.738 "name": "Nvme10", 00:28:19.738 "trtype": "tcp", 00:28:19.738 "traddr": "10.0.0.2", 00:28:19.738 "adrfam": "ipv4", 00:28:19.738 "trsvcid": "4420", 00:28:19.738 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:19.738 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:19.738 "hdgst": false, 00:28:19.738 "ddgst": false 00:28:19.738 }, 00:28:19.738 "method": "bdev_nvme_attach_controller" 00:28:19.738 }' 00:28:19.738 [2024-07-14 09:38:03.947229] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:28:19.738 [2024-07-14 09:38:03.947316] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid826535 ] 00:28:19.738 EAL: No free 2048 kB hugepages reported on node 1 00:28:19.738 [2024-07-14 09:38:04.010255] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:19.738 [2024-07-14 09:38:04.096945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:21.637 Running I/O for 10 seconds... 
00:28:21.637 09:38:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:21.637 09:38:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:28:21.637 09:38:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:21.637 09:38:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.637 09:38:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:21.895 09:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.895 09:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:21.895 09:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:21.896 09:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:21.896 09:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:28:21.896 09:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:28:21.896 09:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:28:21.896 09:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:28:21.896 09:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:21.896 09:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:21.896 09:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:21.896 09:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.896 09:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:21.896 09:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.896 09:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:28:21.896 09:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:28:21.896 09:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:22.154 09:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:22.154 09:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:22.154 09:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:22.154 09:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:22.154 09:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.154 09:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:22.154 09:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.154 09:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 
-- # read_io_count=67 00:28:22.154 09:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:28:22.154 09:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:22.427 09:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:22.427 09:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:22.427 09:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:22.427 09:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:22.427 09:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.427 09:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:22.427 09:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.427 09:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=194 00:28:22.427 09:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 194 -ge 100 ']' 00:28:22.427 09:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:28:22.427 09:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:28:22.427 09:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:28:22.427 09:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 826358 00:28:22.427 09:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 826358 ']' 00:28:22.427 09:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 826358 00:28:22.427 09:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:28:22.427 09:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:22.427 09:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 826358 00:28:22.427 09:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:22.427 09:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:22.427 09:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 826358' 00:28:22.427 killing process with pid 826358 00:28:22.427 09:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 826358 00:28:22.427 09:38:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 826358 00:28:22.427 [2024-07-14 09:38:06.772629] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2780af0 is same with the state(5) to be set 00:28:22.427 [2024-07-14 09:38:06.772709] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2780af0 is same with the state(5) to be set 00:28:22.427 [2024-07-14 09:38:06.772733] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2780af0 is same with the state(5) to be set 00:28:22.427 [2024-07-14 09:38:06.772747] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x2780af0 is same with the state(5) to be set
00:28:22.427 [2024-07-14 09:38:06.774965] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2692140 is same with the state(5) to be set
00:28:22.428 [2024-07-14 09:38:06.777379] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2780f90 is same with the state(5) to be set
00:28:22.429 [2024-07-14 09:38:06.780192] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2781430 is same with the state(5) to be set
00:28:22.430 [2024-07-14 09:38:06.782497] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27818f0 is same with the state(5) to be set
00:28:22.430 [2024-07-14 09:38:06.785011] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242bd20 is same with the state(5) to be set
state(5) to be set 00:28:22.431 [2024-07-14 09:38:06.785779] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242bd20 is same with the state(5) to be set 00:28:22.431 [2024-07-14 09:38:06.785791] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242bd20 is same with the state(5) to be set 00:28:22.431 [2024-07-14 09:38:06.785803] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242bd20 is same with the state(5) to be set 00:28:22.431 [2024-07-14 09:38:06.785814] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242bd20 is same with the state(5) to be set 00:28:22.431 [2024-07-14 09:38:06.785826] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242bd20 is same with the state(5) to be set 00:28:22.431 [2024-07-14 09:38:06.785838] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242bd20 is same with the state(5) to be set 00:28:22.431 [2024-07-14 09:38:06.785876] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242bd20 is same with the state(5) to be set 00:28:22.431 [2024-07-14 09:38:06.786718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.431 [2024-07-14 09:38:06.786759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.431 [2024-07-14 09:38:06.786777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.431 [2024-07-14 09:38:06.786791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.431 [2024-07-14 09:38:06.786805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.431 [2024-07-14 09:38:06.786819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.431 [2024-07-14 09:38:06.786833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.431 [2024-07-14 09:38:06.786846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.431 [2024-07-14 09:38:06.786876] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x296bb10 is same with the state(5) to be set 00:28:22.431 [2024-07-14 09:38:06.786945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.431 [2024-07-14 09:38:06.786967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.431 [2024-07-14 09:38:06.786982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.431 [2024-07-14 09:38:06.786996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.431 [2024-07-14 09:38:06.787010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.431 [2024-07-14 09:38:06.787023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.431 [2024-07-14 09:38:06.787037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.431 [2024-07-14 09:38:06.787050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.431 [2024-07-14 09:38:06.787063] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2b0b350 is same with the state(5) to be set
[... the same sequence repeats between 09:38:06.787112 and 09:38:06.788021 for the remaining host qpairs: four ASYNC EVENT REQUEST (0c) admin commands (qid:0 cid:0-3) are printed and each completes as ABORTED - SQ DELETION (00/08), followed by nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=... is same with the state(5) to be set for tqpair=0x298d3d0, 0x296d370, 0x2994490, 0x2534ee0 and 0x296d8c0; interleaved with this output, tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242c1c0 is same with the state(5) to be set is logged repeatedly from 09:38:06.787185 through 09:38:06.788700 ...]
00:28:22.433 [2024-07-14 09:38:06.788562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.433 [2024-07-14 09:38:06.788588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... WRITE sqid:1 cid:24-28 nsid:1 lba:27648-28160 len:128 are likewise printed and each completes as ABORTED - SQ DELETION (00/08) between 09:38:06.788615 and 09:38:06.788773 ...]
00:28:22.433 [2024-07-14 09:38:06.788789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.433 [2024-07-14 09:38:06.788803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.433 [2024-07-14 09:38:06.788819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.433 [2024-07-14 09:38:06.788833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.433 [2024-07-14 09:38:06.788848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.433 [2024-07-14 09:38:06.788862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.433 [2024-07-14 09:38:06.788890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.433 [2024-07-14 09:38:06.788905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.433 [2024-07-14 09:38:06.788920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.433 [2024-07-14 09:38:06.788935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.433 [2024-07-14 09:38:06.788950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.433 [2024-07-14 09:38:06.788964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.433 [2024-07-14 09:38:06.788980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.433 [2024-07-14 09:38:06.788994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.433 [2024-07-14 09:38:06.789010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.433 [2024-07-14 09:38:06.789024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.433 [2024-07-14 09:38:06.789040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.433 [2024-07-14 09:38:06.789054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.433 [2024-07-14 09:38:06.789074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.433 [2024-07-14 09:38:06.789089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.433 [2024-07-14 09:38:06.789105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.433 [2024-07-14 09:38:06.789119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:22.433 [2024-07-14 09:38:06.789135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.433 [2024-07-14 09:38:06.789148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.433 [2024-07-14 09:38:06.789164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.433 [2024-07-14 09:38:06.789182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.433 [2024-07-14 09:38:06.789197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.433 [2024-07-14 09:38:06.789211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.433 [2024-07-14 09:38:06.789227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.433 [2024-07-14 09:38:06.789246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.433 [2024-07-14 09:38:06.789262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.433 [2024-07-14 09:38:06.789276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.433 [2024-07-14 09:38:06.789308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.433 [2024-07-14 09:38:06.789322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.433 [2024-07-14 09:38:06.789338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.433 [2024-07-14 09:38:06.789352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.433 [2024-07-14 09:38:06.789367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.433 [2024-07-14 09:38:06.789382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.433 [2024-07-14 09:38:06.789398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.433 [2024-07-14 09:38:06.789411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.433 [2024-07-14 09:38:06.789430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.433 [2024-07-14 09:38:06.789444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:28:22.433 [2024-07-14 09:38:06.789460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.433 [2024-07-14 09:38:06.789477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.433 [2024-07-14 09:38:06.789493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.433 [2024-07-14 09:38:06.789506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.433 [2024-07-14 09:38:06.789521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.433 [2024-07-14 09:38:06.789534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.433 [2024-07-14 09:38:06.789565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.433 [2024-07-14 09:38:06.789579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.433 [2024-07-14 09:38:06.789595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.433 [2024-07-14 09:38:06.789609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.433 [2024-07-14 09:38:06.789625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.433 [2024-07-14 09:38:06.789639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.433 [2024-07-14 09:38:06.789656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.433 [2024-07-14 09:38:06.789669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.433 [2024-07-14 09:38:06.789685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.433 [2024-07-14 09:38:06.789699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.434 [2024-07-14 09:38:06.789714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.434 [2024-07-14 09:38:06.789728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.434 [2024-07-14 09:38:06.789728] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2691800 is same with the state(5) to be set 00:28:22.434 [2024-07-14 09:38:06.789744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:22.434 [2024-07-14 09:38:06.789759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... WRITE sqid:1 cid:60-63 nsid:1 lba:32256-32640 len:128 and READ sqid:1 cid:0-22 nsid:1 lba:24576-27392 len:128 are printed and each completes as ABORTED - SQ DELETION (00/08) between 09:38:06.789774 and 09:38:06.790657; interleaved with this output, tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2691ca0 is same with the state(5) to be set is logged repeatedly from 09:38:06.790055 through 09:38:06.790940 ...]
00:28:22.435 [2024-07-14 09:38:06.790709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[2024-07-14 09:38:06.790786] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2940170 was disconnected and freed. reset controller.
00:28:22.435 [2024-07-14 09:38:06.793384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.435 [2024-07-14 09:38:06.793412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.435 [2024-07-14 09:38:06.793436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.435 [2024-07-14 09:38:06.793453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.435 [2024-07-14 09:38:06.793469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.435 [2024-07-14 09:38:06.793485]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.435 [2024-07-14 09:38:06.793502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.435 [2024-07-14 09:38:06.793517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.436 [2024-07-14 09:38:06.793533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.436 [2024-07-14 09:38:06.793548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.436 [2024-07-14 09:38:06.793564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.436 [2024-07-14 09:38:06.793579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.436 [2024-07-14 09:38:06.793595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.436 [2024-07-14 09:38:06.793610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.436 [2024-07-14 09:38:06.793626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.436 [2024-07-14 09:38:06.793641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.436 [2024-07-14 09:38:06.793663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.436 [2024-07-14 09:38:06.793679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.436 [2024-07-14 09:38:06.793696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.436 [2024-07-14 09:38:06.793711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.436 [2024-07-14 09:38:06.793727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.436 [2024-07-14 09:38:06.793742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.436 [2024-07-14 09:38:06.793759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.436 [2024-07-14 09:38:06.793774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.436 [2024-07-14 09:38:06.793790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.436 [2024-07-14 09:38:06.793806] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.436 [2024-07-14 09:38:06.793822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.436 [2024-07-14 09:38:06.793837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.436 [2024-07-14 09:38:06.793853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.436 [2024-07-14 09:38:06.793880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.436 [2024-07-14 09:38:06.793898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.436 [2024-07-14 09:38:06.793913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.436 [2024-07-14 09:38:06.793930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.436 [2024-07-14 09:38:06.793944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.436 [2024-07-14 09:38:06.793961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.436 [2024-07-14 09:38:06.793976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.436 [2024-07-14 09:38:06.793992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.436 [2024-07-14 09:38:06.794006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.436 [2024-07-14 09:38:06.794023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.436 [2024-07-14 09:38:06.794038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.436 [2024-07-14 09:38:06.794054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.436 [2024-07-14 09:38:06.794073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.436 [2024-07-14 09:38:06.794091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.436 [2024-07-14 09:38:06.794106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.436 [2024-07-14 09:38:06.794122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.436 [2024-07-14 09:38:06.794137] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.436 [2024-07-14 09:38:06.794153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.436 [2024-07-14 09:38:06.794172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.436 [2024-07-14 09:38:06.794188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.436 [2024-07-14 09:38:06.794203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.436 [2024-07-14 09:38:06.794219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.436 [2024-07-14 09:38:06.794235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.436 [2024-07-14 09:38:06.794251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.436 [2024-07-14 09:38:06.794265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.436 [2024-07-14 09:38:06.794281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.436 [2024-07-14 09:38:06.794296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.436 [2024-07-14 09:38:06.794312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.436 [2024-07-14 09:38:06.794326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.436 [2024-07-14 09:38:06.794342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.436 [2024-07-14 09:38:06.794356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.436 [2024-07-14 09:38:06.794372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.436 [2024-07-14 09:38:06.794387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.436 [2024-07-14 09:38:06.794402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.436 [2024-07-14 09:38:06.794417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.436 [2024-07-14 09:38:06.794432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.436 [2024-07-14 09:38:06.794447] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.436 [2024-07-14 09:38:06.794466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.436 [2024-07-14 09:38:06.794481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.437 [2024-07-14 09:38:06.794497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.437 [2024-07-14 09:38:06.794511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.437 [2024-07-14 09:38:06.794528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.437 [2024-07-14 09:38:06.794541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.437 [2024-07-14 09:38:06.794558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.437 [2024-07-14 09:38:06.794572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.437 [2024-07-14 09:38:06.794588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.437 [2024-07-14 09:38:06.794603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.437 [2024-07-14 09:38:06.794619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.437 [2024-07-14 09:38:06.794634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.437 [2024-07-14 09:38:06.794658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.437 [2024-07-14 09:38:06.794673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.437 [2024-07-14 09:38:06.794689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.437 [2024-07-14 09:38:06.794703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.437 [2024-07-14 09:38:06.794718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.437 [2024-07-14 09:38:06.794744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.437 [2024-07-14 09:38:06.794760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.437 [2024-07-14 09:38:06.794775] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.437 [2024-07-14 09:38:06.794791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.437 [2024-07-14 09:38:06.794805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.437 [2024-07-14 09:38:06.794821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.437 [2024-07-14 09:38:06.794835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.437 [2024-07-14 09:38:06.794852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.437 [2024-07-14 09:38:06.794880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.437 [2024-07-14 09:38:06.794898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.437 [2024-07-14 09:38:06.794913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.437 [2024-07-14 09:38:06.794930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.437 [2024-07-14 09:38:06.794944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.437 [2024-07-14 09:38:06.794960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.437 [2024-07-14 09:38:06.794974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.437 [2024-07-14 09:38:06.794990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.437 [2024-07-14 09:38:06.795004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.437 [2024-07-14 09:38:06.795020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.437 [2024-07-14 09:38:06.795034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.437 [2024-07-14 09:38:06.795050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.437 [2024-07-14 09:38:06.795064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.437 [2024-07-14 09:38:06.795079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.437 [2024-07-14 09:38:06.795094] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.437 [2024-07-14 09:38:06.795109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.437 [2024-07-14 09:38:06.795123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.437 [2024-07-14 09:38:06.795139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.437 [2024-07-14 09:38:06.795153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.437 [2024-07-14 09:38:06.795177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.437 [2024-07-14 09:38:06.795192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.437 [2024-07-14 09:38:06.795208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.437 [2024-07-14 09:38:06.795222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.437 [2024-07-14 09:38:06.795237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.437 [2024-07-14 09:38:06.795262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.437 [2024-07-14 09:38:06.795281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.437 [2024-07-14 09:38:06.795296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.437 [2024-07-14 09:38:06.795312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.437 [2024-07-14 09:38:06.795333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.437 [2024-07-14 09:38:06.795349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.437 [2024-07-14 09:38:06.795364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.437 [2024-07-14 09:38:06.795381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.437 [2024-07-14 09:38:06.795397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.437 [2024-07-14 09:38:06.795412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.437 [2024-07-14 09:38:06.795426] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.437 [2024-07-14 09:38:06.795442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.437 [2024-07-14 09:38:06.795457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.437 [2024-07-14 09:38:06.795556] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2944020 was disconnected and freed. reset controller. 00:28:22.437 [2024-07-14 09:38:06.795710] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:28:22.437 [2024-07-14 09:38:06.795750] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2994490 (9): Bad file descriptor 00:28:22.437 [2024-07-14 09:38:06.797376] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:28:22.437 [2024-07-14 09:38:06.797454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2affc40 (9): Bad file descriptor 00:28:22.437 [2024-07-14 09:38:06.797513] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x296bb10 (9): Bad file descriptor 00:28:22.437 [2024-07-14 09:38:06.797572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.437 [2024-07-14 09:38:06.797594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.437 [2024-07-14 09:38:06.797610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.437 [2024-07-14 09:38:06.797625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.437 [2024-07-14 09:38:06.797640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.437 [2024-07-14 09:38:06.797654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.437 [2024-07-14 09:38:06.797668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.437 [2024-07-14 09:38:06.797682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.437 [2024-07-14 09:38:06.797704] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2996400 is same with the state(5) to be set 00:28:22.437 [2024-07-14 09:38:06.797734] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2b0b350 (9): Bad file descriptor 00:28:22.437 [2024-07-14 09:38:06.797769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x298d3d0 (9): Bad file descriptor 00:28:22.437 [2024-07-14 09:38:06.797820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.437 [2024-07-14 09:38:06.797841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.437 [2024-07-14 09:38:06.797877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.437 [2024-07-14 09:38:06.797893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.437 [2024-07-14 09:38:06.797907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.437 [2024-07-14 09:38:06.797921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.437 [2024-07-14 09:38:06.797936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.437 [2024-07-14 09:38:06.797950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.437 [2024-07-14 09:38:06.797963] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2b13030 is same with the state(5) to be set 00:28:22.438 [2024-07-14 09:38:06.797992] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x296d370 (9): Bad file descriptor 00:28:22.438 [2024-07-14 09:38:06.798024] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2534ee0 (9): Bad file descriptor 00:28:22.438 [2024-07-14 09:38:06.798055] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x296d8c0 (9): Bad file descriptor 00:28:22.438 [2024-07-14 09:38:06.798862] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:22.438 [2024-07-14 09:38:06.799084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.438 [2024-07-14 09:38:06.799113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2994490 with addr=10.0.0.2, port=4420 00:28:22.438 [2024-07-14 09:38:06.799130] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2994490 is same with the state(5) to be set 00:28:22.438 [2024-07-14 09:38:06.799214] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:22.438 [2024-07-14 09:38:06.799281] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:22.438 [2024-07-14 09:38:06.799344] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:22.438 [2024-07-14 09:38:06.799412] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:22.438 [2024-07-14 09:38:06.799551] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:22.438 [2024-07-14 09:38:06.799907] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:22.438 [2024-07-14 09:38:06.800108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.438 [2024-07-14 09:38:06.800136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2affc40 with addr=10.0.0.2, port=4420 00:28:22.438 [2024-07-14 09:38:06.800153] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2affc40 is same with the state(5) to be set 00:28:22.438 [2024-07-14 09:38:06.800173] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2994490 (9): Bad file descriptor 00:28:22.438 [2024-07-14 09:38:06.800333] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:22.438 [2024-07-14 09:38:06.800409] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2affc40 (9): Bad file descriptor 00:28:22.438 [2024-07-14 09:38:06.800434] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:28:22.438 [2024-07-14 09:38:06.800448] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:28:22.438 [2024-07-14 09:38:06.800464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:28:22.438 [2024-07-14 09:38:06.800561] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:22.438 [2024-07-14 09:38:06.800584] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:28:22.438 [2024-07-14 09:38:06.800601] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:28:22.438 [2024-07-14 09:38:06.800614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:28:22.438 [2024-07-14 09:38:06.800673] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:22.438 [2024-07-14 09:38:06.807432] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2996400 (9): Bad file descriptor 00:28:22.438 [2024-07-14 09:38:06.807531] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2b13030 (9): Bad file descriptor 00:28:22.438 [2024-07-14 09:38:06.807732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.438 [2024-07-14 09:38:06.807760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.438 [2024-07-14 09:38:06.807792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.438 [2024-07-14 09:38:06.807809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.438 [2024-07-14 09:38:06.807827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.438 [2024-07-14 09:38:06.807843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.438 [2024-07-14 09:38:06.807875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.438 [2024-07-14 09:38:06.807892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.438 [2024-07-14 09:38:06.807909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.438 [2024-07-14 09:38:06.807924] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.438 [2024-07-14 09:38:06.807940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.438 [2024-07-14 09:38:06.807955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.438 [2024-07-14 09:38:06.807972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.438 [2024-07-14 09:38:06.807986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.438 [2024-07-14 09:38:06.808003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.438 [2024-07-14 09:38:06.808030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.438 [2024-07-14 09:38:06.808048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.438 [2024-07-14 09:38:06.808063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.438 [2024-07-14 09:38:06.808080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.438 [2024-07-14 09:38:06.808095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.438 [2024-07-14 09:38:06.808112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.438 [2024-07-14 09:38:06.808127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.438 [2024-07-14 09:38:06.808144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.438 [2024-07-14 09:38:06.808158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.438 [2024-07-14 09:38:06.808183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.438 [2024-07-14 09:38:06.808198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.438 [2024-07-14 09:38:06.808214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.438 [2024-07-14 09:38:06.808229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.438 [2024-07-14 09:38:06.808245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.438 [2024-07-14 09:38:06.808261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.438 [2024-07-14 09:38:06.808278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.438 [2024-07-14 09:38:06.808293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.438 [2024-07-14 09:38:06.808309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.438 [2024-07-14 09:38:06.808323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.438 [2024-07-14 09:38:06.808340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.438 [2024-07-14 09:38:06.808355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.438 [2024-07-14 09:38:06.808372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.438 [2024-07-14 09:38:06.808386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.438 [2024-07-14 09:38:06.808403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.438 [2024-07-14 09:38:06.808418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.438 [2024-07-14 09:38:06.808434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.438 [2024-07-14 09:38:06.808452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.438 [2024-07-14 09:38:06.808469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.438 [2024-07-14 09:38:06.808484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.438 [2024-07-14 09:38:06.808500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.438 [2024-07-14 09:38:06.808515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.438 [2024-07-14 09:38:06.808531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.438 [2024-07-14 09:38:06.808546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.438 [2024-07-14 09:38:06.808562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.438 [2024-07-14 09:38:06.808578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.438 [2024-07-14 09:38:06.808594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.438 [2024-07-14 09:38:06.808609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.438 [2024-07-14 09:38:06.808626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.438 [2024-07-14 09:38:06.808640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.438 [2024-07-14 09:38:06.808656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.438 [2024-07-14 09:38:06.808671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.438 [2024-07-14 09:38:06.808687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.438 [2024-07-14 09:38:06.808701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.438 [2024-07-14 09:38:06.808718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.438 [2024-07-14 09:38:06.808733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.438 [2024-07-14 09:38:06.808749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.438 [2024-07-14 09:38:06.808764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.439 [2024-07-14 09:38:06.808780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.439 [2024-07-14 09:38:06.808795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.439 [2024-07-14 09:38:06.808811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.439 [2024-07-14 09:38:06.808826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.439 [2024-07-14 09:38:06.808846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.439 [2024-07-14 09:38:06.808875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.439 [2024-07-14 09:38:06.808893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.439 [2024-07-14 09:38:06.808908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:22.439 [2024-07-14 09:38:06.808925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.439 [2024-07-14 09:38:06.808940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.439 [2024-07-14 09:38:06.808957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.439 [2024-07-14 09:38:06.808972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.439 [2024-07-14 09:38:06.808988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.439 [2024-07-14 09:38:06.809003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.439 [2024-07-14 09:38:06.809020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.439 [2024-07-14 09:38:06.809035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.439 [2024-07-14 09:38:06.809051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.439 [2024-07-14 09:38:06.809066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.439 [2024-07-14 09:38:06.809083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.439 [2024-07-14 09:38:06.809097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.439 [2024-07-14 09:38:06.809115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.439 [2024-07-14 09:38:06.809130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.439 [2024-07-14 09:38:06.809146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.439 [2024-07-14 09:38:06.809161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.439 [2024-07-14 09:38:06.809177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.439 [2024-07-14 09:38:06.809192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.439 [2024-07-14 09:38:06.809208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.439 [2024-07-14 09:38:06.809222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:22.439 [2024-07-14 09:38:06.809239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.439 [2024-07-14 09:38:06.809257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.439 [2024-07-14 09:38:06.809274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.439 [2024-07-14 09:38:06.809289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.439 [2024-07-14 09:38:06.809306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.439 [2024-07-14 09:38:06.809321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.439 [2024-07-14 09:38:06.809337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.439 [2024-07-14 09:38:06.809352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.439 [2024-07-14 09:38:06.809368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.439 [2024-07-14 09:38:06.809382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.439 [2024-07-14 09:38:06.809399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.439 [2024-07-14 09:38:06.809414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.439 [2024-07-14 09:38:06.809430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.439 [2024-07-14 09:38:06.809445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.439 [2024-07-14 09:38:06.809461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.439 [2024-07-14 09:38:06.809476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.439 [2024-07-14 09:38:06.809492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.439 [2024-07-14 09:38:06.809507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.439 [2024-07-14 09:38:06.809523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.439 [2024-07-14 09:38:06.809537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.439 [2024-07-14 
09:38:06.809554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.439 [2024-07-14 09:38:06.809569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.439 [2024-07-14 09:38:06.809585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.439 [2024-07-14 09:38:06.809600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.439 [2024-07-14 09:38:06.809616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.439 [2024-07-14 09:38:06.809631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.439 [2024-07-14 09:38:06.809651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.439 [2024-07-14 09:38:06.809667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.439 [2024-07-14 09:38:06.809683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.439 [2024-07-14 09:38:06.809697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.439 [2024-07-14 09:38:06.809713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.439 [2024-07-14 09:38:06.809729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.439 [2024-07-14 09:38:06.809744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.439 [2024-07-14 09:38:06.809759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.439 [2024-07-14 09:38:06.809775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.439 [2024-07-14 09:38:06.809790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.439 [2024-07-14 09:38:06.809806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.439 [2024-07-14 09:38:06.809821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.439 [2024-07-14 09:38:06.809836] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25380e0 is same with the state(5) to be set 00:28:22.439 [2024-07-14 09:38:06.811147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.439 [2024-07-14 09:38:06.811182] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.439 [2024-07-14 09:38:06.811202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.439 [2024-07-14 09:38:06.811230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.439 [2024-07-14 09:38:06.811246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.439 [2024-07-14 09:38:06.811260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.439 [2024-07-14 09:38:06.811277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.439 [2024-07-14 09:38:06.811292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.439 [2024-07-14 09:38:06.811309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.439 [2024-07-14 09:38:06.811323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.439 [2024-07-14 09:38:06.811340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.439 [2024-07-14 09:38:06.811354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.439 [2024-07-14 09:38:06.811376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.439 [2024-07-14 09:38:06.811392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.439 [2024-07-14 09:38:06.811409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.439 [2024-07-14 09:38:06.811424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.439 [2024-07-14 09:38:06.811441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.439 [2024-07-14 09:38:06.811455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.439 [2024-07-14 09:38:06.811473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.440 [2024-07-14 09:38:06.811488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.440 [2024-07-14 09:38:06.811504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.440 [2024-07-14 09:38:06.811518] nvme_qpair.c: 
[Repeated qpair-drain output condensed] Between 09:38:06.811 and 09:38:06.822 on 2024-07-14 (elapsed marks 00:28:22.440 - 00:28:22.445), nvme_qpair.c emits the same NOTICE pair for every outstanding I/O on the deleted submission queue: 243:nvme_io_qpair_print_command reports READ sqid:1 nsid:1 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 (cid 0-63, lba stepping by 128 across roughly the 8192-32640 range), and 474:spdk_nvme_print_completion reports the matching completion as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0. Each run of 64 aborted commands ends with nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25392e0 / 0x293ed20 / 0x2af0690 is same with the state(5) to be set; a further identical run (starting 09:38:06.821252 at lba 17024) continues below.
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.445 [2024-07-14 09:38:06.822210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.445 [2024-07-14 09:38:06.822226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.445 [2024-07-14 09:38:06.822241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.445 [2024-07-14 09:38:06.822258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.445 [2024-07-14 09:38:06.822273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.445 [2024-07-14 09:38:06.822290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.445 [2024-07-14 09:38:06.822305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.445 [2024-07-14 09:38:06.822321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.445 [2024-07-14 09:38:06.822336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.445 [2024-07-14 09:38:06.822353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.445 [2024-07-14 09:38:06.822367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.445 [2024-07-14 09:38:06.822384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.445 [2024-07-14 09:38:06.822399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.445 [2024-07-14 09:38:06.822416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.445 [2024-07-14 09:38:06.822431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.445 [2024-07-14 09:38:06.822447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.445 [2024-07-14 09:38:06.822462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.445 [2024-07-14 09:38:06.822478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.445 [2024-07-14 09:38:06.822493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.445 [2024-07-14 09:38:06.822513] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.445 [2024-07-14 09:38:06.822528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.445 [2024-07-14 09:38:06.822545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.445 [2024-07-14 09:38:06.822559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.445 [2024-07-14 09:38:06.822576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.445 [2024-07-14 09:38:06.822591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.445 [2024-07-14 09:38:06.822607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.445 [2024-07-14 09:38:06.822622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.445 [2024-07-14 09:38:06.822639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.445 [2024-07-14 09:38:06.822654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.445 [2024-07-14 09:38:06.822671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.445 [2024-07-14 09:38:06.822686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.445 [2024-07-14 09:38:06.822702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.445 [2024-07-14 09:38:06.822717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.445 [2024-07-14 09:38:06.822734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.445 [2024-07-14 09:38:06.822749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.445 [2024-07-14 09:38:06.822765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.445 [2024-07-14 09:38:06.822779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.445 [2024-07-14 09:38:06.822795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.445 [2024-07-14 09:38:06.822810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.445 [2024-07-14 09:38:06.822826] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.445 [2024-07-14 09:38:06.822841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.445 [2024-07-14 09:38:06.822857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.445 [2024-07-14 09:38:06.822888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.445 [2024-07-14 09:38:06.822905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.445 [2024-07-14 09:38:06.822923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.445 [2024-07-14 09:38:06.822941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.445 [2024-07-14 09:38:06.822956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.445 [2024-07-14 09:38:06.822972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.445 [2024-07-14 09:38:06.822987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.445 [2024-07-14 09:38:06.823004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.445 [2024-07-14 09:38:06.823019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.445 [2024-07-14 09:38:06.823035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.445 [2024-07-14 09:38:06.823050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.445 [2024-07-14 09:38:06.823066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.445 [2024-07-14 09:38:06.823081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.445 [2024-07-14 09:38:06.823099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.445 [2024-07-14 09:38:06.823113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.445 [2024-07-14 09:38:06.823130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.445 [2024-07-14 09:38:06.823145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.445 [2024-07-14 09:38:06.823161] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.445 [2024-07-14 09:38:06.823185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.445 [2024-07-14 09:38:06.823203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.445 [2024-07-14 09:38:06.823218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.445 [2024-07-14 09:38:06.823235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.445 [2024-07-14 09:38:06.823250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.445 [2024-07-14 09:38:06.823267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.445 [2024-07-14 09:38:06.823281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.446 [2024-07-14 09:38:06.823297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.446 [2024-07-14 09:38:06.823312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.446 [2024-07-14 09:38:06.823332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.446 [2024-07-14 09:38:06.823348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.446 [2024-07-14 09:38:06.823362] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2941620 is same with the state(5) to be set 00:28:22.446 [2024-07-14 09:38:06.824632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.446 [2024-07-14 09:38:06.824656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.446 [2024-07-14 09:38:06.824678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.446 [2024-07-14 09:38:06.824695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.446 [2024-07-14 09:38:06.824711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.446 [2024-07-14 09:38:06.824726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.446 [2024-07-14 09:38:06.824743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.446 [2024-07-14 09:38:06.824757] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.446 [2024-07-14 09:38:06.824774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.446 [2024-07-14 09:38:06.824788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.446 [2024-07-14 09:38:06.824805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.446 [2024-07-14 09:38:06.824819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.446 [2024-07-14 09:38:06.824835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.446 [2024-07-14 09:38:06.824850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.446 [2024-07-14 09:38:06.824878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.446 [2024-07-14 09:38:06.824894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.446 [2024-07-14 09:38:06.824910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.446 [2024-07-14 09:38:06.824924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.446 [2024-07-14 09:38:06.824941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.446 [2024-07-14 09:38:06.824955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.446 [2024-07-14 09:38:06.824972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.446 [2024-07-14 09:38:06.824987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.446 [2024-07-14 09:38:06.825009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.446 [2024-07-14 09:38:06.825025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.446 [2024-07-14 09:38:06.825041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.446 [2024-07-14 09:38:06.825056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.446 [2024-07-14 09:38:06.825072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.446 [2024-07-14 09:38:06.825087] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.446 [2024-07-14 09:38:06.825104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.446 [2024-07-14 09:38:06.825119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.446 [2024-07-14 09:38:06.825135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.446 [2024-07-14 09:38:06.825150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.446 [2024-07-14 09:38:06.825166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.446 [2024-07-14 09:38:06.825190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.446 [2024-07-14 09:38:06.825206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.446 [2024-07-14 09:38:06.825221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.446 [2024-07-14 09:38:06.825237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.446 [2024-07-14 09:38:06.825253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.446 [2024-07-14 09:38:06.825270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.446 [2024-07-14 09:38:06.825285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.446 [2024-07-14 09:38:06.825302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.446 [2024-07-14 09:38:06.825316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.446 [2024-07-14 09:38:06.825333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.446 [2024-07-14 09:38:06.825347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.446 [2024-07-14 09:38:06.825364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.446 [2024-07-14 09:38:06.825378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.446 [2024-07-14 09:38:06.825395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.446 [2024-07-14 09:38:06.825413] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.446 [2024-07-14 09:38:06.825430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.446 [2024-07-14 09:38:06.825445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.446 [2024-07-14 09:38:06.825461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.446 [2024-07-14 09:38:06.825476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.446 [2024-07-14 09:38:06.825492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.446 [2024-07-14 09:38:06.825508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.446 [2024-07-14 09:38:06.825525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.446 [2024-07-14 09:38:06.825539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.446 [2024-07-14 09:38:06.825557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.446 [2024-07-14 09:38:06.825572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.446 [2024-07-14 09:38:06.825588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.446 [2024-07-14 09:38:06.825603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.446 [2024-07-14 09:38:06.825620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.446 [2024-07-14 09:38:06.825634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.446 [2024-07-14 09:38:06.825651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.446 [2024-07-14 09:38:06.825666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.446 [2024-07-14 09:38:06.825683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.446 [2024-07-14 09:38:06.825697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.446 [2024-07-14 09:38:06.825714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.446 [2024-07-14 09:38:06.825728] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.446 [2024-07-14 09:38:06.825745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.446 [2024-07-14 09:38:06.825759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.446 [2024-07-14 09:38:06.825776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.446 [2024-07-14 09:38:06.825790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.446 [2024-07-14 09:38:06.825806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.446 [2024-07-14 09:38:06.825829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.446 [2024-07-14 09:38:06.825847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.446 [2024-07-14 09:38:06.825879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.446 [2024-07-14 09:38:06.825897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.446 [2024-07-14 09:38:06.825912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.446 [2024-07-14 09:38:06.825929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.446 [2024-07-14 09:38:06.825944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.446 [2024-07-14 09:38:06.825960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.447 [2024-07-14 09:38:06.825975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.447 [2024-07-14 09:38:06.825992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.447 [2024-07-14 09:38:06.826007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.447 [2024-07-14 09:38:06.826024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.447 [2024-07-14 09:38:06.826040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.447 [2024-07-14 09:38:06.826057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.447 [2024-07-14 09:38:06.826072] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.447 [2024-07-14 09:38:06.826088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.447 [2024-07-14 09:38:06.826102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.447 [2024-07-14 09:38:06.826118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.447 [2024-07-14 09:38:06.826133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.447 [2024-07-14 09:38:06.826150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.447 [2024-07-14 09:38:06.826175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.447 [2024-07-14 09:38:06.826191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.447 [2024-07-14 09:38:06.826206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.447 [2024-07-14 09:38:06.826223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.447 [2024-07-14 09:38:06.826238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.447 [2024-07-14 09:38:06.826258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.447 [2024-07-14 09:38:06.826273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.447 [2024-07-14 09:38:06.826290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.447 [2024-07-14 09:38:06.826305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.447 [2024-07-14 09:38:06.826322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.447 [2024-07-14 09:38:06.826337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.447 [2024-07-14 09:38:06.826353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.447 [2024-07-14 09:38:06.826368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.447 [2024-07-14 09:38:06.826384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.447 [2024-07-14 09:38:06.826399] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.447 [2024-07-14 09:38:06.826416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.447 [2024-07-14 09:38:06.826431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.447 [2024-07-14 09:38:06.826447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.447 [2024-07-14 09:38:06.826462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.447 [2024-07-14 09:38:06.826479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.447 [2024-07-14 09:38:06.826494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.447 [2024-07-14 09:38:06.826510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.447 [2024-07-14 09:38:06.826525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.447 [2024-07-14 09:38:06.826542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.447 [2024-07-14 09:38:06.826557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.447 [2024-07-14 09:38:06.826573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.447 [2024-07-14 09:38:06.826588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.447 [2024-07-14 09:38:06.826604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.447 [2024-07-14 09:38:06.826619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.447 [2024-07-14 09:38:06.826635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.447 [2024-07-14 09:38:06.826653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.447 [2024-07-14 09:38:06.826670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.447 [2024-07-14 09:38:06.826685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.447 [2024-07-14 09:38:06.826702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.447 [2024-07-14 09:38:06.826716] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.447 [2024-07-14 09:38:06.826732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29468e0 is same with the state(5) to be set
00:28:22.447 [2024-07-14 09:38:06.828434] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:22.447 [2024-07-14 09:38:06.828468] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:28:22.447 [2024-07-14 09:38:06.828487] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:28:22.447 [2024-07-14 09:38:06.828590] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:28:22.447 [2024-07-14 09:38:06.828626] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:28:22.447 [2024-07-14 09:38:06.828647] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:28:22.447 [2024-07-14 09:38:06.828678] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:28:22.447 [2024-07-14 09:38:06.828795] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:28:22.447 [2024-07-14 09:38:06.828822] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:28:22.447 [2024-07-14 09:38:06.828839] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:28:22.447 [2024-07-14 09:38:06.828874] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:28:22.447 [2024-07-14 09:38:06.829231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.447 [2024-07-14 09:38:06.829262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2534ee0 with addr=10.0.0.2, port=4420
00:28:22.447 [2024-07-14 09:38:06.829280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2534ee0 is same with the state(5) to be set
00:28:22.447 [2024-07-14 09:38:06.829440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.447 [2024-07-14 09:38:06.829465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x296d8c0 with addr=10.0.0.2, port=4420
00:28:22.447 [2024-07-14 09:38:06.829481] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x296d8c0 is same with the state(5) to be set
00:28:22.447 [2024-07-14 09:38:06.829641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.447 [2024-07-14 09:38:06.829666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x296d370 with addr=10.0.0.2, port=4420
00:28:22.447 [2024-07-14 09:38:06.829682] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x296d370 is same with the state(5) to be set
00:28:22.447 [2024-07-14 09:38:06.831034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 [2024-07-14 09:38:06.831060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.447 [2024-07-14 09:38:06.831102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.447 [2024-07-14 09:38:06.831119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.447 [2024-07-14 09:38:06.831136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.447 [2024-07-14 09:38:06.831152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.447 [2024-07-14 09:38:06.831177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.447 [2024-07-14 09:38:06.831192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.447 [2024-07-14 09:38:06.831209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.447 [2024-07-14 09:38:06.831232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.447 [2024-07-14 09:38:06.831248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.447 [2024-07-14 09:38:06.831263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.447 [2024-07-14 09:38:06.831279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.447 [2024-07-14 09:38:06.831294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.447 [2024-07-14 09:38:06.831310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.447 [2024-07-14 09:38:06.831325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.447 [2024-07-14 09:38:06.831341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.447 [2024-07-14 09:38:06.831357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.447 [2024-07-14 09:38:06.831373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.447 [2024-07-14 09:38:06.831388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.448 [2024-07-14 09:38:06.831405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.448 [2024-07-14 09:38:06.831420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:22.448 [2024-07-14 09:38:06.831437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.448 [2024-07-14 09:38:06.831452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.448 [2024-07-14 09:38:06.831469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.448 [2024-07-14 09:38:06.831484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.448 [2024-07-14 09:38:06.831501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.448 [2024-07-14 09:38:06.831519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.448 [2024-07-14 09:38:06.831537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.448 [2024-07-14 09:38:06.831552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.448 [2024-07-14 09:38:06.831568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.448 [2024-07-14 09:38:06.831583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.448 [2024-07-14 09:38:06.831600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.448 [2024-07-14 09:38:06.831614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.448 [2024-07-14 09:38:06.831631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.448 [2024-07-14 09:38:06.831646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.448 [2024-07-14 09:38:06.831663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.448 [2024-07-14 09:38:06.831678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.448 [2024-07-14 09:38:06.831695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.448 [2024-07-14 09:38:06.831712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.448 [2024-07-14 09:38:06.831728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.448 [2024-07-14 09:38:06.831743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.448 [2024-07-14 
09:38:06.831760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.448 [2024-07-14 09:38:06.831775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.448 [2024-07-14 09:38:06.831791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.448 [2024-07-14 09:38:06.831806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.448 [2024-07-14 09:38:06.831823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.448 [2024-07-14 09:38:06.831838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.448 [2024-07-14 09:38:06.831872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.448 [2024-07-14 09:38:06.831889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.448 [2024-07-14 09:38:06.831906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.448 [2024-07-14 09:38:06.831920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.448 [2024-07-14 09:38:06.831937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.448 [2024-07-14 09:38:06.831956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.448 [2024-07-14 09:38:06.831972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.448 [2024-07-14 09:38:06.831988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.448 [2024-07-14 09:38:06.832004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.448 [2024-07-14 09:38:06.832019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.448 [2024-07-14 09:38:06.832035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.448 [2024-07-14 09:38:06.832050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.448 [2024-07-14 09:38:06.832067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.448 [2024-07-14 09:38:06.832082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.448 [2024-07-14 09:38:06.832099] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.448 [2024-07-14 09:38:06.832115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.448 [2024-07-14 09:38:06.832132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.448 [2024-07-14 09:38:06.832147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.448 [2024-07-14 09:38:06.832168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.448 [2024-07-14 09:38:06.832183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.448 [2024-07-14 09:38:06.832200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.448 [2024-07-14 09:38:06.832215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.448 [2024-07-14 09:38:06.832231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.448 [2024-07-14 09:38:06.832247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.448 [2024-07-14 09:38:06.832263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.448 [2024-07-14 09:38:06.832278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.448 [2024-07-14 09:38:06.832295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.448 [2024-07-14 09:38:06.832310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.448 [2024-07-14 09:38:06.832326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.448 [2024-07-14 09:38:06.832341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.448 [2024-07-14 09:38:06.832361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.448 [2024-07-14 09:38:06.832387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.448 [2024-07-14 09:38:06.832404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.448 [2024-07-14 09:38:06.832419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.448 [2024-07-14 09:38:06.832436] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.448 [2024-07-14 09:38:06.832450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.448 [2024-07-14 09:38:06.832468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.448 [2024-07-14 09:38:06.832483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.448 [2024-07-14 09:38:06.832499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.448 [2024-07-14 09:38:06.832513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.448 [2024-07-14 09:38:06.832530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.448 [2024-07-14 09:38:06.832544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.448 [2024-07-14 09:38:06.832561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.448 [2024-07-14 09:38:06.832576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.448 [2024-07-14 09:38:06.832593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.448 [2024-07-14 09:38:06.832607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.448 [2024-07-14 09:38:06.832624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.448 [2024-07-14 09:38:06.832639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.449 [2024-07-14 09:38:06.832655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.449 [2024-07-14 09:38:06.832670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.449 [2024-07-14 09:38:06.832687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.449 [2024-07-14 09:38:06.832701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.449 [2024-07-14 09:38:06.832718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.449 [2024-07-14 09:38:06.832732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.449 [2024-07-14 09:38:06.832749] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.449 [2024-07-14 09:38:06.832767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.449 [2024-07-14 09:38:06.832783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.449 [2024-07-14 09:38:06.832798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.449 [2024-07-14 09:38:06.832814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.449 [2024-07-14 09:38:06.832828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.449 [2024-07-14 09:38:06.832844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.449 [2024-07-14 09:38:06.832863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.449 [2024-07-14 09:38:06.832888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.449 [2024-07-14 09:38:06.832904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.449 [2024-07-14 09:38:06.832919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.449 [2024-07-14 09:38:06.832934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.449 [2024-07-14 09:38:06.832951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.449 [2024-07-14 09:38:06.832966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.449 [2024-07-14 09:38:06.832982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.449 [2024-07-14 09:38:06.832996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.449 [2024-07-14 09:38:06.833012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.449 [2024-07-14 09:38:06.833027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.449 [2024-07-14 09:38:06.833043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.449 [2024-07-14 09:38:06.833058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.449 [2024-07-14 09:38:06.833074] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.449 [2024-07-14 09:38:06.833089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.449 [2024-07-14 09:38:06.833105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.449 [2024-07-14 09:38:06.833119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.449 [2024-07-14 09:38:06.833136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.449 [2024-07-14 09:38:06.833151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.449 [2024-07-14 09:38:06.833179] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2942b20 is same with the state(5) to be set 00:28:22.449 [2024-07-14 09:38:06.834718] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:28:22.449 [2024-07-14 09:38:06.834749] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:28:22.449 [2024-07-14 09:38:06.834996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.449 [2024-07-14 09:38:06.835026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x296bb10 with addr=10.0.0.2, port=4420 00:28:22.449 [2024-07-14 09:38:06.835043] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x296bb10 is same with the state(5) to be set 00:28:22.449 [2024-07-14 09:38:06.835193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.449 [2024-07-14 09:38:06.835219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x298d3d0 with addr=10.0.0.2, port=4420 00:28:22.449 [2024-07-14 09:38:06.835235] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x298d3d0 is same with the state(5) to be set 00:28:22.449 [2024-07-14 09:38:06.835413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.449 [2024-07-14 09:38:06.835449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2b0b350 with addr=10.0.0.2, port=4420 00:28:22.449 [2024-07-14 09:38:06.835465] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2b0b350 is same with the state(5) to be set 00:28:22.449 [2024-07-14 09:38:06.835645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.449 [2024-07-14 09:38:06.835669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2994490 with addr=10.0.0.2, port=4420 00:28:22.449 [2024-07-14 09:38:06.835685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2994490 is same with the state(5) to be set 00:28:22.449 [2024-07-14 09:38:06.835710] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2534ee0 (9): Bad file descriptor 00:28:22.449 [2024-07-14 09:38:06.835731] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x296d8c0 
(9): Bad file descriptor 00:28:22.449 [2024-07-14 09:38:06.835750] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x296d370 (9): Bad file descriptor 00:28:22.449 [2024-07-14 09:38:06.835916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.449 [2024-07-14 09:38:06.835940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.449 [2024-07-14 09:38:06.835964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.449 [2024-07-14 09:38:06.835981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.449 [2024-07-14 09:38:06.835999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.449 [2024-07-14 09:38:06.836014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.449 [2024-07-14 09:38:06.836030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.449 [2024-07-14 09:38:06.836045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.449 [2024-07-14 09:38:06.836062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.449 [2024-07-14 09:38:06.836082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.449 [2024-07-14 09:38:06.836100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.449 [2024-07-14 09:38:06.836115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.449 [2024-07-14 09:38:06.836132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.449 [2024-07-14 09:38:06.836148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.449 [2024-07-14 09:38:06.836173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.449 [2024-07-14 09:38:06.836188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.449 [2024-07-14 09:38:06.836206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.449 [2024-07-14 09:38:06.836226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.449 [2024-07-14 09:38:06.836242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.449 
[2024-07-14 09:38:06.836257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.449 [2024-07-14 09:38:06.836273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.449 [2024-07-14 09:38:06.836288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.449 [2024-07-14 09:38:06.836305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.449 [2024-07-14 09:38:06.836319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.449 [2024-07-14 09:38:06.836336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.449 [2024-07-14 09:38:06.836351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.449 [2024-07-14 09:38:06.836368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.449 [2024-07-14 09:38:06.836388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.449 [2024-07-14 09:38:06.836404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.449 [2024-07-14 09:38:06.836419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.449 [2024-07-14 09:38:06.836436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.449 [2024-07-14 09:38:06.836451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.449 [2024-07-14 09:38:06.836467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.449 [2024-07-14 09:38:06.836482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.449 [2024-07-14 09:38:06.836499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.449 [2024-07-14 09:38:06.836518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.450 [2024-07-14 09:38:06.836535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.450 [2024-07-14 09:38:06.836550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.450 [2024-07-14 09:38:06.836566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.450 [2024-07-14 
09:38:06.836581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.450 [2024-07-14 09:38:06.836597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.450 [2024-07-14 09:38:06.836612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.450 [2024-07-14 09:38:06.836628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.450 [2024-07-14 09:38:06.836643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.450 [2024-07-14 09:38:06.836660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.450 [2024-07-14 09:38:06.836675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.450 [2024-07-14 09:38:06.836692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.450 [2024-07-14 09:38:06.836707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.450 [2024-07-14 09:38:06.836723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.450 [2024-07-14 09:38:06.836737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.450 [2024-07-14 09:38:06.836754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.450 [2024-07-14 09:38:06.836769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.450 [2024-07-14 09:38:06.836785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.450 [2024-07-14 09:38:06.836800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.450 [2024-07-14 09:38:06.836816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.450 [2024-07-14 09:38:06.836831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.450 [2024-07-14 09:38:06.836848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.450 [2024-07-14 09:38:06.836883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.450 [2024-07-14 09:38:06.836901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.450 [2024-07-14 09:38:06.836916] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.450 [2024-07-14 09:38:06.836937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.450 [2024-07-14 09:38:06.836952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.450 [2024-07-14 09:38:06.836969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.450 [2024-07-14 09:38:06.836983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.450 [2024-07-14 09:38:06.837000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.450 [2024-07-14 09:38:06.837016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.450 [2024-07-14 09:38:06.837032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.450 [2024-07-14 09:38:06.837048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.450 [2024-07-14 09:38:06.837064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.450 [2024-07-14 09:38:06.837079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.450 [2024-07-14 09:38:06.837096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.450 [2024-07-14 09:38:06.837111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.450 [2024-07-14 09:38:06.837127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.450 [2024-07-14 09:38:06.837142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.450 [2024-07-14 09:38:06.837164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.450 [2024-07-14 09:38:06.837178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.450 [2024-07-14 09:38:06.837195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.450 [2024-07-14 09:38:06.837209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.450 [2024-07-14 09:38:06.837230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.450 [2024-07-14 09:38:06.837244] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.450 [2024-07-14 09:38:06.837262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.450 [2024-07-14 09:38:06.837277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.450 [2024-07-14 09:38:06.837294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.450 [2024-07-14 09:38:06.837308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.450 [2024-07-14 09:38:06.837325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.450 [2024-07-14 09:38:06.837344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.450 [2024-07-14 09:38:06.837361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.450 [2024-07-14 09:38:06.837376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.450 [2024-07-14 09:38:06.837393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.450 [2024-07-14 09:38:06.837408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.450 [2024-07-14 09:38:06.837424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.450 [2024-07-14 09:38:06.837439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.450 [2024-07-14 09:38:06.837455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.450 [2024-07-14 09:38:06.837471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.450 [2024-07-14 09:38:06.837487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.450 [2024-07-14 09:38:06.837502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.450 [2024-07-14 09:38:06.837518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.450 [2024-07-14 09:38:06.837532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.450 [2024-07-14 09:38:06.837548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.450 [2024-07-14 09:38:06.837563] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.450 [2024-07-14 09:38:06.837579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.450 [2024-07-14 09:38:06.837594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.450 [2024-07-14 09:38:06.837611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.450 [2024-07-14 09:38:06.837626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.450 [2024-07-14 09:38:06.837642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.450 [2024-07-14 09:38:06.837657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.450 [2024-07-14 09:38:06.837673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.450 [2024-07-14 09:38:06.837689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.450 [2024-07-14 09:38:06.837714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.450 [2024-07-14 09:38:06.837729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.450 [2024-07-14 09:38:06.837750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.450 [2024-07-14 09:38:06.837766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.450 [2024-07-14 09:38:06.837784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.450 [2024-07-14 09:38:06.837798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.450 [2024-07-14 09:38:06.837815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.450 [2024-07-14 09:38:06.837829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.450 [2024-07-14 09:38:06.837845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.450 [2024-07-14 09:38:06.837872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.450 [2024-07-14 09:38:06.837892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.450 [2024-07-14 09:38:06.837907] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.450 [2024-07-14 09:38:06.837923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.451 [2024-07-14 09:38:06.837938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.451 [2024-07-14 09:38:06.837954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.451 [2024-07-14 09:38:06.837969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.451 [2024-07-14 09:38:06.837986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.451 [2024-07-14 09:38:06.838000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.451 [2024-07-14 09:38:06.838016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.451 [2024-07-14 09:38:06.838032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:22.451 [2024-07-14 09:38:06.838047] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2945400 is same with the state(5) to be set
00:28:22.451 task offset: 27520 on job bdev=Nvme5n1 fails
00:28:22.451
00:28:22.451 Latency(us)
00:28:22.451 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:22.451 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:22.451 Job: Nvme1n1 ended in about 0.89 seconds with error
00:28:22.451 Verification LBA range: start 0x0 length 0x400
00:28:22.451 Nvme1n1 : 0.89 216.40 13.53 72.13 0.00 219227.59 21554.06 248551.35
00:28:22.451 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:22.451 Job: Nvme2n1 ended in about 0.89 seconds with error
00:28:22.451 Verification LBA range: start 0x0 length 0x400
00:28:22.451 Nvme2n1 : 0.89 143.73 8.98 71.86 0.00 287367.27 23884.23 250104.79
00:28:22.451 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:22.451 Job: Nvme3n1 ended in about 0.89 seconds with error
00:28:22.451 Verification LBA range: start 0x0 length 0x400
00:28:22.451 Nvme3n1 : 0.89 71.60 4.47 71.60 0.00 423572.29 23884.23 382147.70
00:28:22.451 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:22.451 Job: Nvme4n1 ended in about 0.90 seconds with error
00:28:22.451 Verification LBA range: start 0x0 length 0x400
00:28:22.451 Nvme4n1 : 0.90 213.97 13.37 71.32 0.00 207987.86 21748.24 245444.46
00:28:22.451 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:22.451 Job: Nvme5n1 ended in about 0.87 seconds with error
00:28:22.451 Verification LBA range: start 0x0 length 0x400
00:28:22.451 Nvme5n1 : 0.87 220.83 13.80 73.61 0.00 196430.46 3859.34 228356.55
00:28:22.451 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:22.451 Job: Nvme6n1 ended in about 0.90 seconds with error
00:28:22.451 Verification LBA range: start 0x0 length 0x400
00:28:22.451 Nvme6n1 : 0.90 147.67 9.23 71.06 0.00 259667.46 22136.60 212822.09
00:28:22.451 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:22.451 Job: Nvme7n1 ended in about 0.91 seconds with error
00:28:22.451 Verification LBA range: start 0x0 length 0x400
00:28:22.451 Nvme7n1 : 0.91 70.29 4.39 70.29 0.00 395582.58 60196.03 338651.21
00:28:22.451 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:22.451 Job: Nvme8n1 ended in about 0.87 seconds with error
00:28:22.451 Verification LBA range: start 0x0 length 0x400
00:28:22.451 Nvme8n1 : 0.87 219.82 13.74 73.27 0.00 183966.81 4708.88 234570.33
00:28:22.451 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:22.451 Job: Nvme9n1 ended in about 0.92 seconds with error
00:28:22.451 Verification LBA range: start 0x0 length 0x400
00:28:22.451 Nvme9n1 : 0.92 139.84 8.74 69.92 0.00 253465.03 26214.40 257872.02
00:28:22.451 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:22.451 Job: Nvme10n1 ended in about 0.90 seconds with error
00:28:22.451 Verification LBA range: start 0x0 length 0x400
00:28:22.451 Nvme10n1 : 0.90 144.90 9.06 70.79 0.00 239942.17 30680.56 237677.23
00:28:22.451 ===================================================================================================================
00:28:22.451 Total : 1589.05 99.32 715.86 0.00 249700.24 3859.34 382147.70
00:28:22.709 [2024-07-14 09:38:06.870852] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:28:22.709 [2024-07-14 09:38:06.870943] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:28:22.709 [2024-07-14 09:38:06.871226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.709 [2024-07-14 09:38:06.871273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2affc40 with addr=10.0.0.2, port=4420
00:28:22.709 [2024-07-14 09:38:06.871294] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2affc40 is same with the state(5) to be set
00:28:22.709 [2024-07-14 09:38:06.871464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:22.709 [2024-07-14 09:38:06.871492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2996400 with addr=10.0.0.2, port=4420
00:28:22.709 [2024-07-14 09:38:06.871509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2996400 is same with the state(5) to be set
00:28:22.709 [2024-07-14 09:38:06.871536] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x296bb10 (9): Bad file descriptor
00:28:22.709 [2024-07-14 09:38:06.871560] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x298d3d0 (9): Bad file descriptor
00:28:22.709 [2024-07-14 09:38:06.871579] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2b0b350 (9): Bad file descriptor
00:28:22.710 [2024-07-14 09:38:06.871598] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2994490 (9): Bad file descriptor
00:28:22.710 [2024-07-14 09:38:06.871625] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:22.710 [2024-07-14 09:38:06.871640] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller
reinitialization failed 00:28:22.710 [2024-07-14 09:38:06.871656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:22.710 [2024-07-14 09:38:06.871683] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:28:22.710 [2024-07-14 09:38:06.871698] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:28:22.710 [2024-07-14 09:38:06.871712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:28:22.710 [2024-07-14 09:38:06.871731] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:28:22.710 [2024-07-14 09:38:06.871745] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:28:22.710 [2024-07-14 09:38:06.871758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:28:22.710 [2024-07-14 09:38:06.871795] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:22.710 [2024-07-14 09:38:06.871833] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:22.710 [2024-07-14 09:38:06.871857] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:22.710 [2024-07-14 09:38:06.871908] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:22.710 [2024-07-14 09:38:06.871932] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:22.710 [2024-07-14 09:38:06.871952] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:22.710 [2024-07-14 09:38:06.871972] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:22.710 [2024-07-14 09:38:06.872382] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:22.710 [2024-07-14 09:38:06.872407] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:22.710 [2024-07-14 09:38:06.872420] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:22.710 [2024-07-14 09:38:06.872610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.710 [2024-07-14 09:38:06.872638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2b13030 with addr=10.0.0.2, port=4420 00:28:22.710 [2024-07-14 09:38:06.872655] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2b13030 is same with the state(5) to be set 00:28:22.710 [2024-07-14 09:38:06.872675] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2affc40 (9): Bad file descriptor 00:28:22.710 [2024-07-14 09:38:06.872695] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2996400 (9): Bad file descriptor 00:28:22.710 [2024-07-14 09:38:06.872712] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:28:22.710 [2024-07-14 09:38:06.872725] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:28:22.710 [2024-07-14 09:38:06.872739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:28:22.710 [2024-07-14 09:38:06.872758] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:28:22.710 [2024-07-14 09:38:06.872773] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:28:22.710 [2024-07-14 09:38:06.872786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:28:22.710 [2024-07-14 09:38:06.872810] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:28:22.710 [2024-07-14 09:38:06.872825] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:28:22.710 [2024-07-14 09:38:06.872838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:28:22.710 [2024-07-14 09:38:06.872861] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:28:22.710 [2024-07-14 09:38:06.872886] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:28:22.710 [2024-07-14 09:38:06.872900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:28:22.710 [2024-07-14 09:38:06.872936] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:22.710 [2024-07-14 09:38:06.872961] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:22.710 [2024-07-14 09:38:06.872997] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:22.710 [2024-07-14 09:38:06.873017] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:22.710 [2024-07-14 09:38:06.873035] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:22.710 [2024-07-14 09:38:06.873054] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:28:22.710 [2024-07-14 09:38:06.873408] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:22.710 [2024-07-14 09:38:06.873432] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:22.710 [2024-07-14 09:38:06.873445] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:22.710 [2024-07-14 09:38:06.873456] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:22.710 [2024-07-14 09:38:06.873483] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2b13030 (9): Bad file descriptor 00:28:22.710 [2024-07-14 09:38:06.873503] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:28:22.710 [2024-07-14 09:38:06.873517] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:28:22.710 [2024-07-14 09:38:06.873530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:28:22.710 [2024-07-14 09:38:06.873557] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:28:22.710 [2024-07-14 09:38:06.873572] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:28:22.710 [2024-07-14 09:38:06.873585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:28:22.710 [2024-07-14 09:38:06.873649] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:28:22.710 [2024-07-14 09:38:06.873675] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:28:22.710 [2024-07-14 09:38:06.873691] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:22.710 [2024-07-14 09:38:06.873707] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:22.710 [2024-07-14 09:38:06.873720] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:22.710 [2024-07-14 09:38:06.873752] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:28:22.710 [2024-07-14 09:38:06.873768] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:28:22.710 [2024-07-14 09:38:06.873783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:28:22.710 [2024-07-14 09:38:06.873839] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:22.710 [2024-07-14 09:38:06.874019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.710 [2024-07-14 09:38:06.874047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x296d370 with addr=10.0.0.2, port=4420 00:28:22.710 [2024-07-14 09:38:06.874063] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x296d370 is same with the state(5) to be set 00:28:22.710 [2024-07-14 09:38:06.874220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.710 [2024-07-14 09:38:06.874246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x296d8c0 with addr=10.0.0.2, port=4420 00:28:22.710 [2024-07-14 09:38:06.874263] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x296d8c0 is same with the state(5) to be set 00:28:22.711 [2024-07-14 09:38:06.874417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.711 [2024-07-14 09:38:06.874443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2534ee0 with addr=10.0.0.2, port=4420 00:28:22.711 [2024-07-14 09:38:06.874460] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2534ee0 is same with the state(5) to be set 00:28:22.711 [2024-07-14 09:38:06.874505] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x296d370 (9): Bad file descriptor 00:28:22.711 [2024-07-14 09:38:06.874530] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x296d8c0 (9): Bad file descriptor 00:28:22.711 [2024-07-14 09:38:06.874549] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2534ee0 (9): Bad file descriptor 00:28:22.711 [2024-07-14 09:38:06.874588] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:28:22.711 [2024-07-14 09:38:06.874606] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:28:22.711 [2024-07-14 09:38:06.874620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:28:22.711 [2024-07-14 09:38:06.874637] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:28:22.711 [2024-07-14 09:38:06.874653] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:28:22.711 [2024-07-14 09:38:06.874666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:28:22.711 [2024-07-14 09:38:06.874681] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:22.711 [2024-07-14 09:38:06.874695] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:22.711 [2024-07-14 09:38:06.874708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:22.711 [2024-07-14 09:38:06.874745] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:22.711 [2024-07-14 09:38:06.874762] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:22.711 [2024-07-14 09:38:06.874774] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:22.970 09:38:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:28:22.970 09:38:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:28:23.908 09:38:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 826535 00:28:23.908 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (826535) - No such process 00:28:23.908 09:38:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:28:23.908 09:38:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:28:23.908 09:38:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:28:23.908 09:38:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:23.908 09:38:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:23.908 09:38:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:28:23.908 09:38:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:23.908 09:38:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:28:23.908 09:38:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:23.908 09:38:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:28:23.908 09:38:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:23.908 09:38:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:23.908 rmmod nvme_tcp 00:28:23.908 rmmod nvme_fabrics 00:28:23.908 rmmod nvme_keyring 00:28:23.908 09:38:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:23.908 09:38:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:28:23.908 09:38:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:28:23.908 09:38:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:28:23.908 09:38:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:23.908 09:38:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:23.908 09:38:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:23.908 09:38:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:23.908 09:38:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:23.908 09:38:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:23.908 09:38:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:23.908 09:38:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:26.438 09:38:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 
addr flush cvl_0_1 00:28:26.438 00:28:26.438 real 0m7.554s 00:28:26.438 user 0m18.285s 00:28:26.438 sys 0m1.561s 00:28:26.438 09:38:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:26.438 09:38:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:26.438 ************************************ 00:28:26.438 END TEST nvmf_shutdown_tc3 00:28:26.438 ************************************ 00:28:26.438 09:38:10 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:28:26.438 09:38:10 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:28:26.438 00:28:26.438 real 0m27.187s 00:28:26.438 user 1m15.142s 00:28:26.438 sys 0m6.438s 00:28:26.438 09:38:10 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:26.438 09:38:10 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:26.438 ************************************ 00:28:26.438 END TEST nvmf_shutdown 00:28:26.438 ************************************ 00:28:26.438 09:38:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:26.438 09:38:10 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:28:26.438 09:38:10 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:26.438 09:38:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:26.438 09:38:10 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:28:26.438 09:38:10 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:26.438 09:38:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:26.438 09:38:10 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:28:26.438 09:38:10 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:26.438 09:38:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:26.438 09:38:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:26.438 09:38:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:26.438 ************************************ 00:28:26.438 START TEST nvmf_multicontroller 00:28:26.438 ************************************ 00:28:26.438 09:38:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:26.438 * Looking for test storage... 
00:28:26.438 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:26.438 09:38:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:26.438 09:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:28:26.438 09:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:26.438 09:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:26.438 09:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:26.438 09:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:26.438 09:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:26.438 09:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:26.438 09:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:26.438 09:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:26.439 09:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:26.439 09:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:26.439 09:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:26.439 09:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:26.439 09:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:26.439 09:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:26.439 09:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:26.439 09:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:26.439 09:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:26.439 09:38:10 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:26.439 09:38:10 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:26.439 09:38:10 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:26.439 09:38:10 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.439 09:38:10 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.439 09:38:10 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.439 09:38:10 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:28:26.439 09:38:10 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.439 09:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:28:26.439 09:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:26.439 09:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:26.439 09:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:26.439 09:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:26.439 09:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:26.439 09:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:26.439 09:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:26.439 09:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:26.439 09:38:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:26.439 09:38:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:26.439 09:38:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:28:26.439 09:38:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:28:26.439 09:38:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:26.439 09:38:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:28:26.439 09:38:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:28:26.439 09:38:10 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:26.439 09:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:26.439 09:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:26.439 09:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:26.439 09:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:26.439 09:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:26.439 09:38:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:26.439 09:38:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:26.439 09:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:26.439 09:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:26.439 09:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:28:26.439 09:38:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:28.339 09:38:12 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:28.339 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:28.339 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:28.339 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:28.339 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:28.339 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:28.340 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:28.340 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:28:28.340 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:28.340 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:28.340 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:28.340 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:28.340 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:28.340 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:28.340 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:28.340 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:28.340 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:28.340 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:28.340 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:28.340 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:28.340 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:28.340 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:28.340 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:28.340 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:28.340 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:28.340 09:38:12 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:28.340 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:28.340 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:28.340 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:28.340 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:28.340 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:28.340 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:28.340 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:28:28.340 00:28:28.340 --- 10.0.0.2 ping statistics --- 00:28:28.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:28.340 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:28:28.340 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:28.340 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:28.340 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:28:28.340 00:28:28.340 --- 10.0.0.1 ping statistics --- 00:28:28.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:28.340 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:28:28.340 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:28.340 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:28:28.340 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:28.340 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:28.340 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:28.340 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:28.340 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:28.340 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:28.340 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:28.340 09:38:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:28:28.340 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:28.340 09:38:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:28.340 09:38:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:28.340 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=829048 00:28:28.340 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:28.340 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 829048 00:28:28.340 09:38:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 829048 ']' 00:28:28.340 09:38:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:28.340 09:38:12 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:28:28.340 09:38:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:28.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:28.340 09:38:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:28.340 09:38:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:28.340 [2024-07-14 09:38:12.683324] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:28:28.340 [2024-07-14 09:38:12.683402] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:28.340 EAL: No free 2048 kB hugepages reported on node 1 00:28:28.340 [2024-07-14 09:38:12.747217] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:28.599 [2024-07-14 09:38:12.832081] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:28.599 [2024-07-14 09:38:12.832152] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:28.599 [2024-07-14 09:38:12.832172] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:28.599 [2024-07-14 09:38:12.832189] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:28.599 [2024-07-14 09:38:12.832199] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:28.599 [2024-07-14 09:38:12.832291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:28.599 [2024-07-14 09:38:12.832356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:28.599 [2024-07-14 09:38:12.832368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:28.599 09:38:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:28.599 09:38:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:28:28.599 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:28.599 09:38:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:28.599 09:38:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:28.599 09:38:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:28.599 09:38:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:28.599 09:38:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.599 09:38:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:28.599 [2024-07-14 09:38:12.958668] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:28.599 09:38:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.599 09:38:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:28.599 09:38:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.599 09:38:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:28.599 Malloc0 00:28:28.599 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.599 09:38:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:28.599 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.599 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:28.599 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.599 09:38:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:28.599 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.599 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:28.599 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.599 09:38:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:28.599 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.599 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:28.599 [2024-07-14 09:38:13.023389] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:28.599 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.599 
09:38:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:28.599 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.599 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:28.599 [2024-07-14 09:38:13.031308] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:28.599 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.599 09:38:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:28.599 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.599 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:28.858 Malloc1 00:28:28.858 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.858 09:38:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:28:28.858 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.858 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:28.858 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.858 09:38:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:28:28.858 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.858 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:28.858 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.858 09:38:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:28.858 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.858 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:28.858 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.858 09:38:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:28:28.858 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.858 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:28.858 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.858 09:38:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=829076 00:28:28.858 09:38:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:28:28.858 09:38:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:28.858 09:38:13 nvmf_tcp.nvmf_multicontroller 
-- host/multicontroller.sh@47 -- # waitforlisten 829076 /var/tmp/bdevperf.sock 00:28:28.858 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 829076 ']' 00:28:28.858 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:28.858 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:28.858 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:28.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:28.858 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:28.858 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:29.115 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:29.115 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:28:29.115 09:38:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:28:29.115 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.115 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:29.372 NVMe0n1 00:28:29.372 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.372 09:38:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:29.372 09:38:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:28:29.372 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.372 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:29.372 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.372 1 00:28:29.372 09:38:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:29.372 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:29.372 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:29.372 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:29.372 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:29.372 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:29.372 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:29.372 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:29.372 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.372 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:29.372 request: 00:28:29.372 { 00:28:29.372 "name": "NVMe0", 00:28:29.372 "trtype": "tcp", 00:28:29.372 "traddr": "10.0.0.2", 00:28:29.372 "adrfam": "ipv4", 00:28:29.372 "trsvcid": "4420", 00:28:29.372 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:29.372 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:28:29.372 "hostaddr": "10.0.0.2", 00:28:29.372 "hostsvcid": "60000", 00:28:29.372 "prchk_reftag": false, 00:28:29.372 "prchk_guard": false, 00:28:29.372 "hdgst": false, 00:28:29.372 "ddgst": false, 00:28:29.372 "method": "bdev_nvme_attach_controller", 00:28:29.372 "req_id": 1 00:28:29.372 } 00:28:29.372 Got JSON-RPC error response 00:28:29.373 response: 00:28:29.373 { 00:28:29.373 "code": -114, 00:28:29.373 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:29.373 } 00:28:29.373 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:29.373 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:29.373 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:29.373 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:29.373 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:29.373 09:38:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:29.373 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:29.373 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:29.373 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:29.373 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:29.373 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:29.373 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:29.373 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:29.373 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.373 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:29.373 request: 00:28:29.373 { 00:28:29.373 "name": "NVMe0", 00:28:29.373 "trtype": "tcp", 00:28:29.373 "traddr": "10.0.0.2", 00:28:29.373 "adrfam": "ipv4", 00:28:29.373 "trsvcid": "4420", 00:28:29.373 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:29.373 "hostaddr": "10.0.0.2", 00:28:29.373 "hostsvcid": "60000", 00:28:29.373 "prchk_reftag": false, 00:28:29.373 "prchk_guard": false, 00:28:29.373 
"hdgst": false, 00:28:29.373 "ddgst": false, 00:28:29.373 "method": "bdev_nvme_attach_controller", 00:28:29.373 "req_id": 1 00:28:29.373 } 00:28:29.373 Got JSON-RPC error response 00:28:29.373 response: 00:28:29.373 { 00:28:29.373 "code": -114, 00:28:29.373 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:29.373 } 00:28:29.373 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:29.373 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:29.373 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:29.373 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:29.373 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:29.373 09:38:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:29.373 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:29.373 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:29.373 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:29.373 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:29.373 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:29.373 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:29.373 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:29.373 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.373 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:29.373 request: 00:28:29.373 { 00:28:29.373 "name": "NVMe0", 00:28:29.373 "trtype": "tcp", 00:28:29.373 "traddr": "10.0.0.2", 00:28:29.373 "adrfam": "ipv4", 00:28:29.373 "trsvcid": "4420", 00:28:29.373 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:29.373 "hostaddr": "10.0.0.2", 00:28:29.373 "hostsvcid": "60000", 00:28:29.373 "prchk_reftag": false, 00:28:29.373 "prchk_guard": false, 00:28:29.373 "hdgst": false, 00:28:29.373 "ddgst": false, 00:28:29.373 "multipath": "disable", 00:28:29.373 "method": "bdev_nvme_attach_controller", 00:28:29.373 "req_id": 1 00:28:29.373 } 00:28:29.373 Got JSON-RPC error response 00:28:29.373 response: 00:28:29.373 { 00:28:29.373 "code": -114, 00:28:29.373 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:28:29.373 } 00:28:29.373 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:29.373 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:29.373 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:29.373 09:38:13 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:29.373 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:29.373 09:38:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:29.373 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:29.373 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:29.373 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:29.373 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:29.373 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:29.373 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:29.373 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:29.373 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.373 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:29.373 request: 00:28:29.373 { 00:28:29.373 "name": "NVMe0", 00:28:29.373 "trtype": "tcp", 00:28:29.373 "traddr": "10.0.0.2", 00:28:29.373 "adrfam": "ipv4", 00:28:29.373 "trsvcid": "4420", 00:28:29.373 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:29.373 "hostaddr": "10.0.0.2", 00:28:29.373 "hostsvcid": "60000", 00:28:29.373 "prchk_reftag": false, 00:28:29.373 "prchk_guard": false, 00:28:29.373 "hdgst": false, 00:28:29.373 "ddgst": false, 00:28:29.373 "multipath": "failover", 00:28:29.373 "method": "bdev_nvme_attach_controller", 00:28:29.373 "req_id": 1 00:28:29.373 } 00:28:29.373 Got JSON-RPC error response 00:28:29.373 response: 00:28:29.373 { 00:28:29.373 "code": -114, 00:28:29.373 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:29.373 } 00:28:29.373 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:29.373 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:29.373 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:29.373 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:29.373 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:29.373 09:38:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:29.373 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.373 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:29.373 00:28:29.373 09:38:13 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.373 09:38:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:29.373 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.373 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:29.373 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.373 09:38:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:28:29.373 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.373 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:29.630 00:28:29.630 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.630 09:38:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:29.630 09:38:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:28:29.630 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.630 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:29.630 09:38:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.630 09:38:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:28:29.630 09:38:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:31.040 0 00:28:31.040 09:38:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:28:31.040 09:38:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.040 09:38:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:31.040 09:38:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.040 09:38:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 829076 00:28:31.040 09:38:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 829076 ']' 00:28:31.040 09:38:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 829076 00:28:31.040 09:38:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:28:31.040 09:38:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:31.040 09:38:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 829076 00:28:31.040 09:38:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:31.040 09:38:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:31.040 09:38:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 829076' 00:28:31.040 killing process with pid 829076 00:28:31.040 09:38:15 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 829076 00:28:31.040 09:38:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 829076 00:28:31.040 09:38:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:31.040 09:38:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.040 09:38:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:31.040 09:38:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.040 09:38:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:31.040 09:38:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.040 09:38:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:31.040 09:38:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.040 09:38:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:28:31.040 09:38:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:31.040 09:38:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:28:31.040 09:38:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:28:31.040 09:38:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:28:31.040 09:38:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:28:31.040 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:31.040 [2024-07-14 09:38:13.128279] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:28:31.041 [2024-07-14 09:38:13.128383] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid829076 ] 00:28:31.041 EAL: No free 2048 kB hugepages reported on node 1 00:28:31.041 [2024-07-14 09:38:13.189344] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:31.041 [2024-07-14 09:38:13.274564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:31.041 [2024-07-14 09:38:13.941491] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 8512b1a1-5378-478c-9f93-0b6307eaa551 already exists 00:28:31.041 [2024-07-14 09:38:13.941535] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:8512b1a1-5378-478c-9f93-0b6307eaa551 alias for bdev NVMe1n1 00:28:31.041 [2024-07-14 09:38:13.941550] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:28:31.041 Running I/O for 1 seconds... 
00:28:31.041 00:28:31.041 Latency(us) 00:28:31.041 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:31.041 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:28:31.041 NVMe0n1 : 1.01 18666.73 72.92 0.00 0.00 6838.36 2087.44 9903.22 00:28:31.041 =================================================================================================================== 00:28:31.041 Total : 18666.73 72.92 0.00 0.00 6838.36 2087.44 9903.22 00:28:31.041 Received shutdown signal, test time was about 1.000000 seconds 00:28:31.041 00:28:31.041 Latency(us) 00:28:31.041 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:31.041 =================================================================================================================== 00:28:31.041 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:31.041 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:31.041 09:38:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:31.041 09:38:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:28:31.041 09:38:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:28:31.041 09:38:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:31.041 09:38:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:28:31.041 09:38:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:31.041 09:38:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:28:31.041 09:38:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:31.041 09:38:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:31.041 rmmod nvme_tcp 00:28:31.041 rmmod nvme_fabrics 00:28:31.041 rmmod nvme_keyring 00:28:31.041 09:38:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:31.041 09:38:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:28:31.041 09:38:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:28:31.041 09:38:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 829048 ']' 00:28:31.041 09:38:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 829048 00:28:31.041 09:38:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 829048 ']' 00:28:31.041 09:38:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 829048 00:28:31.041 09:38:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:28:31.041 09:38:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:31.041 09:38:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 829048 00:28:31.041 09:38:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:31.041 09:38:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:31.041 09:38:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 829048' 00:28:31.041 killing process with pid 829048 00:28:31.041 09:38:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 829048 00:28:31.041 09:38:15 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 829048 00:28:31.608 09:38:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:31.608 09:38:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:31.608 09:38:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:31.608 09:38:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:31.608 09:38:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:31.608 09:38:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:31.608 09:38:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:31.608 09:38:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:33.504 09:38:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:33.504 00:28:33.504 real 0m7.330s 00:28:33.504 user 0m11.134s 00:28:33.504 sys 0m2.472s 00:28:33.504 09:38:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:33.504 09:38:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:33.504 ************************************ 00:28:33.504 END TEST nvmf_multicontroller 00:28:33.504 ************************************ 00:28:33.504 09:38:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:33.504 09:38:17 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:33.504 09:38:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:33.504 09:38:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:33.504 09:38:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:33.504 ************************************ 00:28:33.504 START TEST nvmf_aer 00:28:33.504 ************************************ 00:28:33.504 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:33.504 * Looking for test storage... 
00:28:33.504 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:33.504 09:38:17 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:33.504 09:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:28:33.504 09:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:33.504 09:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:33.504 09:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:33.504 09:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:33.504 09:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:33.504 09:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:33.504 09:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:33.504 09:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:33.504 09:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:33.504 09:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:33.504 09:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:33.504 09:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:33.504 09:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:33.504 09:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:33.504 09:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:33.504 09:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:33.504 09:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:33.504 09:38:17 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:33.504 09:38:17 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:33.504 09:38:17 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:33.504 09:38:17 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.504 09:38:17 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.504 09:38:17 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.504 09:38:17 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:28:33.504 09:38:17 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.504 09:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:28:33.504 09:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:33.504 09:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:33.504 09:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:33.504 09:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:33.504 09:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:33.504 09:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:33.504 09:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:33.504 09:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:33.504 09:38:17 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:28:33.504 09:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:33.504 09:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:33.504 09:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:33.504 09:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:33.504 09:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:33.504 09:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:33.504 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:33.504 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:33.504 09:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:33.504 09:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:33.504 09:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:28:33.504 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:36.030 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:36.030 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:28:36.030 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:36.030 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:28:36.030 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:36.030 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:36.030 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:36.030 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:28:36.030 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:36.030 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:28:36.030 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:28:36.030 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:28:36.030 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:28:36.030 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:28:36.030 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:28:36.030 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:36.030 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:36.030 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:36.030 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:36.030 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:36.030 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:36.030 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:36.030 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:36.030 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:36.030 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:36.030 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:36.030 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:36.030 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:36.030 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:36.030 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:36.030 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:36.030 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:36.030 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:36.030 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:36.030 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:36.030 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:36.030 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:36.030 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:36.030 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:36.030 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:36.030 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:36.030 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 
0x159b)' 00:28:36.030 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:36.030 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:36.030 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:36.030 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:36.030 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:36.030 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:36.030 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:36.030 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:36.030 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:36.030 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:36.030 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:36.030 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:36.031 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:36.031 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:36.031 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:36.031 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:36.031 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:36.031 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:36.031 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:36.031 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:36.031 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:36.031 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:36.031 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:36.031 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:36.031 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:36.031 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:36.031 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:36.031 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:36.031 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:36.031 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:36.031 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:28:36.031 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:36.031 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:36.031 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:36.031 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:36.031 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:36.031 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:36.031 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:36.031 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:36.031 
09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:36.031 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:36.031 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:36.031 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:36.031 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:36.031 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:36.031 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:36.031 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:36.031 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:36.031 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:36.031 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:36.031 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:36.031 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:36.031 09:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:36.031 09:38:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:36.031 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:36.031 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:28:36.031 00:28:36.031 --- 10.0.0.2 ping statistics --- 00:28:36.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:36.031 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:28:36.031 09:38:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:36.031 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
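The sequence replayed above builds the point-to-point TCP topology these host tests run on: one port of the dual-port ice NIC (cvl_0_0) is moved into the namespace cvl_0_0_ns_spdk and becomes the target side at 10.0.0.2, the other port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, port 4420 is opened in the firewall, and a ping in each direction verifies the link. Condensed, with the interface and namespace names from this run:

    ip netns add cvl_0_0_ns_spdk                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move one NIC port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

Every target-side command that follows is prefixed with ip netns exec cvl_0_0_ns_spdk (NVMF_TARGET_NS_CMD), which is why nvmf_tgt below is started through that prefix.
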
00:28:36.031 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:28:36.031 00:28:36.031 --- 10.0.0.1 ping statistics --- 00:28:36.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:36.031 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:28:36.031 09:38:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:36.031 09:38:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:28:36.031 09:38:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:36.031 09:38:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:36.031 09:38:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:36.031 09:38:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:36.031 09:38:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:36.031 09:38:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:36.031 09:38:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:36.031 09:38:20 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:28:36.031 09:38:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:36.031 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:36.031 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:36.031 09:38:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=831281 00:28:36.031 09:38:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:36.031 09:38:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 831281 00:28:36.031 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 831281 ']' 00:28:36.031 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:36.031 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:36.031 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:36.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:36.031 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:36.031 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:36.031 [2024-07-14 09:38:20.102613] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:28:36.031 [2024-07-14 09:38:20.102697] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:36.031 EAL: No free 2048 kB hugepages reported on node 1 00:28:36.031 [2024-07-14 09:38:20.176791] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:36.031 [2024-07-14 09:38:20.268537] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:36.031 [2024-07-14 09:38:20.268599] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:36.031 [2024-07-14 09:38:20.268625] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:36.031 [2024-07-14 09:38:20.268638] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:36.031 [2024-07-14 09:38:20.268650] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:36.031 [2024-07-14 09:38:20.268744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:36.031 [2024-07-14 09:38:20.268811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:36.031 [2024-07-14 09:38:20.268907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:36.031 [2024-07-14 09:38:20.268910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:36.031 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:36.031 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:28:36.031 09:38:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:36.031 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:36.031 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:36.031 09:38:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:36.031 09:38:20 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:36.031 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.031 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:36.031 [2024-07-14 09:38:20.405576] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:36.031 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.031 09:38:20 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:28:36.031 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.031 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:36.031 Malloc0 00:28:36.031 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.031 09:38:20 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:28:36.031 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.031 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:36.031 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.031 09:38:20 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:36.031 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.031 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:36.031 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.031 09:38:20 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:36.031 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.031 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:36.031 [2024-07-14 09:38:20.457066] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:28:36.031 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.031 09:38:20 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:28:36.031 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.031 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:36.031 [ 00:28:36.031 { 00:28:36.031 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:36.031 "subtype": "Discovery", 00:28:36.031 "listen_addresses": [], 00:28:36.031 "allow_any_host": true, 00:28:36.031 "hosts": [] 00:28:36.031 }, 00:28:36.031 { 00:28:36.031 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:36.031 "subtype": "NVMe", 00:28:36.031 "listen_addresses": [ 00:28:36.031 { 00:28:36.031 "trtype": "TCP", 00:28:36.031 "adrfam": "IPv4", 00:28:36.031 "traddr": "10.0.0.2", 00:28:36.031 "trsvcid": "4420" 00:28:36.031 } 00:28:36.031 ], 00:28:36.031 "allow_any_host": true, 00:28:36.031 "hosts": [], 00:28:36.031 "serial_number": "SPDK00000000000001", 00:28:36.031 "model_number": "SPDK bdev Controller", 00:28:36.031 "max_namespaces": 2, 00:28:36.031 "min_cntlid": 1, 00:28:36.031 "max_cntlid": 65519, 00:28:36.032 "namespaces": [ 00:28:36.032 { 00:28:36.032 "nsid": 1, 00:28:36.032 "bdev_name": "Malloc0", 00:28:36.032 "name": "Malloc0", 00:28:36.032 "nguid": "72DE65FF1CE241229B9F614D43A2BFDF", 00:28:36.032 "uuid": "72de65ff-1ce2-4122-9b9f-614d43a2bfdf" 00:28:36.032 } 00:28:36.032 ] 00:28:36.032 } 00:28:36.032 ] 00:28:36.032 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.032 09:38:20 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:28:36.032 09:38:20 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:28:36.032 09:38:20 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=831427 00:28:36.032 09:38:20 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:28:36.032 09:38:20 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:28:36.032 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:28:36.032 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:36.032 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:28:36.032 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:28:36.032 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:28:36.289 EAL: No free 2048 kB hugepages reported on node 1 00:28:36.289 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:36.289 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:28:36.289 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:28:36.289 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:28:36.289 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:28:36.289 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:28:36.289 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:28:36.289 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:28:36.547 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:36.547 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:36.547 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:28:36.547 09:38:20 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:28:36.547 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.547 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:36.547 Malloc1 00:28:36.547 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.547 09:38:20 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:28:36.547 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.547 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:36.547 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.547 09:38:20 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:28:36.547 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.547 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:36.547 Asynchronous Event Request test 00:28:36.547 Attaching to 10.0.0.2 00:28:36.547 Attached to 10.0.0.2 00:28:36.547 Registering asynchronous event callbacks... 00:28:36.547 Starting namespace attribute notice tests for all controllers... 00:28:36.547 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:28:36.547 aer_cb - Changed Namespace 00:28:36.547 Cleaning up... 
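That is the core of the nvmf_aer test: the target is configured over RPC, the aer example tool subscribes to asynchronous events, and adding a second namespace makes the target emit the Namespace Attribute Changed notice (log page 4) captured in the callback output above. A condensed sketch of the sequence, assuming rpc_cmd is the harness wrapper around scripts/rpc.py talking to the target in its namespace and that the repository root is the working directory:

    rm -f /tmp/aer_touch_file
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 --name Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # start the AER listener, wait for it to signal readiness via the touch file,
    # then attach a second namespace to trigger the namespace-changed event
    test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -n 2 -t /tmp/aer_touch_file &
    aerpid=$!
    while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done
    rpc_cmd bdev_malloc_create 64 4096 --name Malloc1
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
    wait $aerpid
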
00:28:36.547 [ 00:28:36.547 { 00:28:36.547 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:36.547 "subtype": "Discovery", 00:28:36.547 "listen_addresses": [], 00:28:36.547 "allow_any_host": true, 00:28:36.547 "hosts": [] 00:28:36.547 }, 00:28:36.547 { 00:28:36.547 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:36.547 "subtype": "NVMe", 00:28:36.547 "listen_addresses": [ 00:28:36.547 { 00:28:36.547 "trtype": "TCP", 00:28:36.547 "adrfam": "IPv4", 00:28:36.547 "traddr": "10.0.0.2", 00:28:36.547 "trsvcid": "4420" 00:28:36.547 } 00:28:36.547 ], 00:28:36.547 "allow_any_host": true, 00:28:36.547 "hosts": [], 00:28:36.547 "serial_number": "SPDK00000000000001", 00:28:36.547 "model_number": "SPDK bdev Controller", 00:28:36.547 "max_namespaces": 2, 00:28:36.547 "min_cntlid": 1, 00:28:36.547 "max_cntlid": 65519, 00:28:36.547 "namespaces": [ 00:28:36.547 { 00:28:36.547 "nsid": 1, 00:28:36.547 "bdev_name": "Malloc0", 00:28:36.547 "name": "Malloc0", 00:28:36.547 "nguid": "72DE65FF1CE241229B9F614D43A2BFDF", 00:28:36.547 "uuid": "72de65ff-1ce2-4122-9b9f-614d43a2bfdf" 00:28:36.547 }, 00:28:36.547 { 00:28:36.547 "nsid": 2, 00:28:36.547 "bdev_name": "Malloc1", 00:28:36.547 "name": "Malloc1", 00:28:36.547 "nguid": "6A88478DAC8C4449834942C02E4465AC", 00:28:36.547 "uuid": "6a88478d-ac8c-4449-8349-42c02e4465ac" 00:28:36.547 } 00:28:36.547 ] 00:28:36.547 } 00:28:36.547 ] 00:28:36.547 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.547 09:38:20 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 831427 00:28:36.547 09:38:20 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:28:36.547 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.547 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:36.547 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.547 09:38:20 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:28:36.547 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.547 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:36.547 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.547 09:38:20 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:36.547 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.547 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:36.547 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.547 09:38:20 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:28:36.547 09:38:20 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:28:36.547 09:38:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:36.547 09:38:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:28:36.547 09:38:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:36.547 09:38:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:28:36.547 09:38:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:36.547 09:38:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:36.547 rmmod nvme_tcp 00:28:36.547 rmmod nvme_fabrics 00:28:36.547 rmmod nvme_keyring 00:28:36.547 09:38:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:36.547 09:38:20 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@124 -- # set -e 00:28:36.547 09:38:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:28:36.547 09:38:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 831281 ']' 00:28:36.547 09:38:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 831281 00:28:36.547 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 831281 ']' 00:28:36.547 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 831281 00:28:36.547 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:28:36.547 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:36.547 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 831281 00:28:36.547 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:36.547 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:36.547 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 831281' 00:28:36.547 killing process with pid 831281 00:28:36.547 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 831281 00:28:36.547 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 831281 00:28:36.806 09:38:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:36.806 09:38:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:36.806 09:38:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:36.806 09:38:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:36.806 09:38:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:36.806 09:38:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:36.806 09:38:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:36.806 09:38:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:39.337 09:38:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:39.337 00:28:39.337 real 0m5.393s 00:28:39.337 user 0m4.388s 00:28:39.337 sys 0m1.904s 00:28:39.337 09:38:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:39.337 09:38:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:39.337 ************************************ 00:28:39.337 END TEST nvmf_aer 00:28:39.337 ************************************ 00:28:39.337 09:38:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:39.337 09:38:23 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:39.337 09:38:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:39.337 09:38:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:39.337 09:38:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:39.337 ************************************ 00:28:39.337 START TEST nvmf_async_init 00:28:39.337 ************************************ 00:28:39.337 09:38:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:39.337 * Looking for test storage... 
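The teardown just replayed is the shared nvmftestfini path: stop this test's nvmf_tgt, unload the kernel NVMe/TCP initiator modules, and undo the namespace plumbing so the next test can rebuild it from scratch. Condensed, with the pid and interface names from this run; the netns removal line is an assumption, since _remove_spdk_ns runs with its output suppressed here:

    kill -0 831281 && kill 831281      # killprocess: stop the nvmf_tgt started above
    modprobe -v -r nvme-tcp            # unloads nvme_tcp (and nvme_fabrics/nvme_keyring)
    modprobe -v -r nvme-fabrics
    ip netns delete cvl_0_0_ns_spdk    # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1
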
00:28:39.337 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:39.337 09:38:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:39.337 09:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:28:39.337 09:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:39.337 09:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:39.337 09:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:39.337 09:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:39.337 09:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:39.337 09:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:39.337 09:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:39.337 09:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:39.337 09:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:39.337 09:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:39.337 09:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:39.337 09:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:39.337 09:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:39.337 09:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:39.337 09:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:39.337 09:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:39.337 09:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:39.337 09:38:23 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:39.337 09:38:23 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:39.337 09:38:23 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:39.337 09:38:23 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.337 09:38:23 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.337 09:38:23 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.337 09:38:23 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:28:39.338 09:38:23 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.338 09:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:28:39.338 09:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:39.338 09:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:39.338 09:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:39.338 09:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:39.338 09:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:39.338 09:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:39.338 09:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:39.338 09:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:39.338 09:38:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:28:39.338 09:38:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:28:39.338 09:38:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:28:39.338 09:38:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:28:39.338 09:38:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:28:39.338 09:38:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:28:39.338 09:38:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=0844d227e6854444852a489309b0336f 00:28:39.338 09:38:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:28:39.338 09:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:39.338 09:38:23 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:39.338 09:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:39.338 09:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:39.338 09:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:39.338 09:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:39.338 09:38:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:39.338 09:38:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:39.338 09:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:39.338 09:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:39.338 09:38:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:28:39.338 09:38:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:41.240 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:41.240 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:28:41.240 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:41.240 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:41.240 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:41.240 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:41.240 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:41.240 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:28:41.240 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:41.240 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:28:41.240 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:28:41.240 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:28:41.240 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:28:41.240 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:28:41.240 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:28:41.240 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:41.240 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:41.240 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:41.240 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:41.240 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:41.240 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:41.240 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:41.240 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:41.240 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:41.240 09:38:25 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:41.240 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:41.240 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:41.240 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:41.240 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:41.240 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:41.240 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:41.240 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:41.240 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:41.240 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:41.240 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:41.240 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:41.240 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:41.240 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:41.240 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:41.240 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:41.240 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:41.240 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:41.240 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:41.240 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:41.240 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:41.241 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
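As in the previous test, gather_supported_nvmf_pci_devs matches the host's PCI NICs against the Intel E810/X722 and Mellanox device-ID lists built above, keeps only ports that are up, and resolves each matching PCI function to its kernel netdev through sysfs; on this host both ice ports (0000:0a:00.0 and 0000:0a:00.1, device 0x159b) resolve to cvl_0_0 and cvl_0_1. A minimal sketch of the sysfs lookup, using the addresses from this run:

    for pci in 0000:0a:00.0 0000:0a:00.1; do
        # each PCI network function lists its netdev name(s) under .../net/
        for dev in /sys/bus/pci/devices/$pci/net/*; do
            echo "Found net devices under $pci: ${dev##*/}"
        done
    done
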
00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:41.241 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:41.241 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:41.241 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:28:41.241 00:28:41.241 --- 10.0.0.2 ping statistics --- 00:28:41.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:41.241 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:41.241 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:41.241 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:28:41.241 00:28:41.241 --- 10.0.0.1 ping statistics --- 00:28:41.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:41.241 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=833363 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 833363 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 833363 ']' 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:41.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:41.241 09:38:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:41.241 [2024-07-14 09:38:25.553588] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
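nvmf_tgt is then launched inside the target namespace (note the ip netns exec prefix from NVMF_TARGET_NS_CMD and the smaller core mask, 0x1 here versus 0xF in the aer test), and the harness blocks in waitforlisten until the RPC socket answers. A rough sketch under the assumption that waitforlisten simply polls the default /var/tmp/spdk.sock; the real helper also bounds the retries and checks that the pid is still alive:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
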
00:28:41.241 [2024-07-14 09:38:25.553687] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:41.241 EAL: No free 2048 kB hugepages reported on node 1 00:28:41.241 [2024-07-14 09:38:25.621445] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:41.500 [2024-07-14 09:38:25.711024] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:41.500 [2024-07-14 09:38:25.711082] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:41.500 [2024-07-14 09:38:25.711106] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:41.500 [2024-07-14 09:38:25.711120] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:41.500 [2024-07-14 09:38:25.711131] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:41.500 [2024-07-14 09:38:25.711172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:41.500 09:38:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:41.500 09:38:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:28:41.500 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:41.500 09:38:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:41.500 09:38:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:41.500 09:38:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:41.500 09:38:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:41.500 09:38:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.500 09:38:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:41.500 [2024-07-14 09:38:25.863854] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:41.500 09:38:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.500 09:38:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:28:41.500 09:38:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.500 09:38:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:41.500 null0 00:28:41.500 09:38:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.500 09:38:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:28:41.500 09:38:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.500 09:38:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:41.500 09:38:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.500 09:38:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:28:41.500 09:38:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.500 09:38:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:41.500 09:38:25 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.500 09:38:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 0844d227e6854444852a489309b0336f 00:28:41.500 09:38:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.500 09:38:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:41.500 09:38:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.500 09:38:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:41.500 09:38:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.500 09:38:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:41.500 [2024-07-14 09:38:25.904099] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:41.500 09:38:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.500 09:38:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:28:41.500 09:38:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.500 09:38:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:41.758 nvme0n1 00:28:41.758 09:38:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.758 09:38:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:41.758 09:38:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.758 09:38:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:41.758 [ 00:28:41.758 { 00:28:41.758 "name": "nvme0n1", 00:28:41.758 "aliases": [ 00:28:41.758 "0844d227-e685-4444-852a-489309b0336f" 00:28:41.758 ], 00:28:41.758 "product_name": "NVMe disk", 00:28:41.758 "block_size": 512, 00:28:41.758 "num_blocks": 2097152, 00:28:41.758 "uuid": "0844d227-e685-4444-852a-489309b0336f", 00:28:41.758 "assigned_rate_limits": { 00:28:41.758 "rw_ios_per_sec": 0, 00:28:41.758 "rw_mbytes_per_sec": 0, 00:28:41.758 "r_mbytes_per_sec": 0, 00:28:41.758 "w_mbytes_per_sec": 0 00:28:41.758 }, 00:28:41.758 "claimed": false, 00:28:41.758 "zoned": false, 00:28:41.758 "supported_io_types": { 00:28:41.758 "read": true, 00:28:41.758 "write": true, 00:28:41.758 "unmap": false, 00:28:41.758 "flush": true, 00:28:41.758 "reset": true, 00:28:41.758 "nvme_admin": true, 00:28:41.758 "nvme_io": true, 00:28:41.758 "nvme_io_md": false, 00:28:41.758 "write_zeroes": true, 00:28:41.758 "zcopy": false, 00:28:41.758 "get_zone_info": false, 00:28:41.758 "zone_management": false, 00:28:41.758 "zone_append": false, 00:28:41.758 "compare": true, 00:28:41.758 "compare_and_write": true, 00:28:41.758 "abort": true, 00:28:41.758 "seek_hole": false, 00:28:41.758 "seek_data": false, 00:28:41.758 "copy": true, 00:28:41.758 "nvme_iov_md": false 00:28:41.758 }, 00:28:41.758 "memory_domains": [ 00:28:41.758 { 00:28:41.758 "dma_device_id": "system", 00:28:41.758 "dma_device_type": 1 00:28:41.758 } 00:28:41.758 ], 00:28:41.758 "driver_specific": { 00:28:41.758 "nvme": [ 00:28:41.758 { 00:28:41.758 "trid": { 00:28:41.758 "trtype": "TCP", 00:28:41.758 "adrfam": "IPv4", 00:28:41.758 "traddr": "10.0.0.2", 
00:28:41.758 "trsvcid": "4420", 00:28:41.758 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:41.758 }, 00:28:41.758 "ctrlr_data": { 00:28:41.758 "cntlid": 1, 00:28:41.758 "vendor_id": "0x8086", 00:28:41.758 "model_number": "SPDK bdev Controller", 00:28:41.758 "serial_number": "00000000000000000000", 00:28:41.758 "firmware_revision": "24.09", 00:28:41.758 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:41.758 "oacs": { 00:28:41.758 "security": 0, 00:28:41.758 "format": 0, 00:28:41.758 "firmware": 0, 00:28:41.758 "ns_manage": 0 00:28:41.758 }, 00:28:41.758 "multi_ctrlr": true, 00:28:41.758 "ana_reporting": false 00:28:41.758 }, 00:28:41.758 "vs": { 00:28:41.758 "nvme_version": "1.3" 00:28:41.758 }, 00:28:41.758 "ns_data": { 00:28:41.758 "id": 1, 00:28:41.758 "can_share": true 00:28:41.758 } 00:28:41.758 } 00:28:41.758 ], 00:28:41.758 "mp_policy": "active_passive" 00:28:41.758 } 00:28:41.758 } 00:28:41.758 ] 00:28:41.758 09:38:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.758 09:38:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:28:41.758 09:38:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.758 09:38:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:41.758 [2024-07-14 09:38:26.157412] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:41.758 [2024-07-14 09:38:26.157505] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c0c40 (9): Bad file descriptor 00:28:42.017 [2024-07-14 09:38:26.290028] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:42.017 09:38:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.017 09:38:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:42.017 09:38:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.017 09:38:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:42.017 [ 00:28:42.017 { 00:28:42.017 "name": "nvme0n1", 00:28:42.017 "aliases": [ 00:28:42.017 "0844d227-e685-4444-852a-489309b0336f" 00:28:42.017 ], 00:28:42.017 "product_name": "NVMe disk", 00:28:42.017 "block_size": 512, 00:28:42.017 "num_blocks": 2097152, 00:28:42.017 "uuid": "0844d227-e685-4444-852a-489309b0336f", 00:28:42.017 "assigned_rate_limits": { 00:28:42.017 "rw_ios_per_sec": 0, 00:28:42.017 "rw_mbytes_per_sec": 0, 00:28:42.017 "r_mbytes_per_sec": 0, 00:28:42.017 "w_mbytes_per_sec": 0 00:28:42.017 }, 00:28:42.017 "claimed": false, 00:28:42.017 "zoned": false, 00:28:42.017 "supported_io_types": { 00:28:42.017 "read": true, 00:28:42.017 "write": true, 00:28:42.017 "unmap": false, 00:28:42.017 "flush": true, 00:28:42.017 "reset": true, 00:28:42.017 "nvme_admin": true, 00:28:42.017 "nvme_io": true, 00:28:42.017 "nvme_io_md": false, 00:28:42.017 "write_zeroes": true, 00:28:42.017 "zcopy": false, 00:28:42.017 "get_zone_info": false, 00:28:42.017 "zone_management": false, 00:28:42.017 "zone_append": false, 00:28:42.017 "compare": true, 00:28:42.017 "compare_and_write": true, 00:28:42.017 "abort": true, 00:28:42.017 "seek_hole": false, 00:28:42.017 "seek_data": false, 00:28:42.017 "copy": true, 00:28:42.017 "nvme_iov_md": false 00:28:42.017 }, 00:28:42.017 "memory_domains": [ 00:28:42.017 { 00:28:42.017 "dma_device_id": "system", 00:28:42.017 "dma_device_type": 
1 00:28:42.017 } 00:28:42.017 ], 00:28:42.017 "driver_specific": { 00:28:42.017 "nvme": [ 00:28:42.017 { 00:28:42.017 "trid": { 00:28:42.017 "trtype": "TCP", 00:28:42.017 "adrfam": "IPv4", 00:28:42.017 "traddr": "10.0.0.2", 00:28:42.017 "trsvcid": "4420", 00:28:42.017 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:42.017 }, 00:28:42.017 "ctrlr_data": { 00:28:42.017 "cntlid": 2, 00:28:42.017 "vendor_id": "0x8086", 00:28:42.017 "model_number": "SPDK bdev Controller", 00:28:42.017 "serial_number": "00000000000000000000", 00:28:42.017 "firmware_revision": "24.09", 00:28:42.017 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:42.017 "oacs": { 00:28:42.017 "security": 0, 00:28:42.017 "format": 0, 00:28:42.017 "firmware": 0, 00:28:42.017 "ns_manage": 0 00:28:42.017 }, 00:28:42.017 "multi_ctrlr": true, 00:28:42.017 "ana_reporting": false 00:28:42.017 }, 00:28:42.017 "vs": { 00:28:42.017 "nvme_version": "1.3" 00:28:42.017 }, 00:28:42.017 "ns_data": { 00:28:42.017 "id": 1, 00:28:42.017 "can_share": true 00:28:42.017 } 00:28:42.017 } 00:28:42.017 ], 00:28:42.017 "mp_policy": "active_passive" 00:28:42.017 } 00:28:42.017 } 00:28:42.017 ] 00:28:42.017 09:38:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.017 09:38:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:42.017 09:38:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.017 09:38:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:42.017 09:38:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.017 09:38:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:28:42.017 09:38:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.cStqVW4bF9 00:28:42.017 09:38:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:28:42.017 09:38:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.cStqVW4bF9 00:28:42.017 09:38:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:28:42.017 09:38:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.017 09:38:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:42.017 09:38:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.017 09:38:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:28:42.017 09:38:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.017 09:38:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:42.017 [2024-07-14 09:38:26.342069] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:42.017 [2024-07-14 09:38:26.342262] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:42.017 09:38:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.017 09:38:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.cStqVW4bF9 00:28:42.017 09:38:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 
00:28:42.017 09:38:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:42.017 [2024-07-14 09:38:26.350072] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:28:42.017 09:38:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.017 09:38:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.cStqVW4bF9 00:28:42.017 09:38:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.017 09:38:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:42.017 [2024-07-14 09:38:26.358103] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:42.017 [2024-07-14 09:38:26.358183] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:28:42.017 nvme0n1 00:28:42.017 09:38:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.017 09:38:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:42.017 09:38:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.017 09:38:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:42.017 [ 00:28:42.017 { 00:28:42.017 "name": "nvme0n1", 00:28:42.017 "aliases": [ 00:28:42.017 "0844d227-e685-4444-852a-489309b0336f" 00:28:42.017 ], 00:28:42.017 "product_name": "NVMe disk", 00:28:42.017 "block_size": 512, 00:28:42.017 "num_blocks": 2097152, 00:28:42.017 "uuid": "0844d227-e685-4444-852a-489309b0336f", 00:28:42.017 "assigned_rate_limits": { 00:28:42.017 "rw_ios_per_sec": 0, 00:28:42.017 "rw_mbytes_per_sec": 0, 00:28:42.017 "r_mbytes_per_sec": 0, 00:28:42.018 "w_mbytes_per_sec": 0 00:28:42.018 }, 00:28:42.018 "claimed": false, 00:28:42.018 "zoned": false, 00:28:42.018 "supported_io_types": { 00:28:42.018 "read": true, 00:28:42.018 "write": true, 00:28:42.018 "unmap": false, 00:28:42.018 "flush": true, 00:28:42.018 "reset": true, 00:28:42.018 "nvme_admin": true, 00:28:42.018 "nvme_io": true, 00:28:42.018 "nvme_io_md": false, 00:28:42.018 "write_zeroes": true, 00:28:42.018 "zcopy": false, 00:28:42.018 "get_zone_info": false, 00:28:42.018 "zone_management": false, 00:28:42.018 "zone_append": false, 00:28:42.018 "compare": true, 00:28:42.018 "compare_and_write": true, 00:28:42.018 "abort": true, 00:28:42.018 "seek_hole": false, 00:28:42.018 "seek_data": false, 00:28:42.018 "copy": true, 00:28:42.018 "nvme_iov_md": false 00:28:42.018 }, 00:28:42.018 "memory_domains": [ 00:28:42.018 { 00:28:42.018 "dma_device_id": "system", 00:28:42.018 "dma_device_type": 1 00:28:42.018 } 00:28:42.018 ], 00:28:42.018 "driver_specific": { 00:28:42.018 "nvme": [ 00:28:42.018 { 00:28:42.018 "trid": { 00:28:42.018 "trtype": "TCP", 00:28:42.018 "adrfam": "IPv4", 00:28:42.018 "traddr": "10.0.0.2", 00:28:42.018 "trsvcid": "4421", 00:28:42.018 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:42.018 }, 00:28:42.018 "ctrlr_data": { 00:28:42.018 "cntlid": 3, 00:28:42.018 "vendor_id": "0x8086", 00:28:42.018 "model_number": "SPDK bdev Controller", 00:28:42.018 "serial_number": "00000000000000000000", 00:28:42.018 "firmware_revision": "24.09", 00:28:42.018 "subnqn": "nqn.2016-06.io.spdk:cnode0", 
00:28:42.018 "oacs": { 00:28:42.018 "security": 0, 00:28:42.018 "format": 0, 00:28:42.018 "firmware": 0, 00:28:42.018 "ns_manage": 0 00:28:42.018 }, 00:28:42.018 "multi_ctrlr": true, 00:28:42.018 "ana_reporting": false 00:28:42.018 }, 00:28:42.018 "vs": { 00:28:42.018 "nvme_version": "1.3" 00:28:42.018 }, 00:28:42.018 "ns_data": { 00:28:42.018 "id": 1, 00:28:42.018 "can_share": true 00:28:42.018 } 00:28:42.018 } 00:28:42.018 ], 00:28:42.018 "mp_policy": "active_passive" 00:28:42.018 } 00:28:42.018 } 00:28:42.018 ] 00:28:42.018 09:38:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.018 09:38:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:42.018 09:38:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.018 09:38:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:42.018 09:38:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.018 09:38:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.cStqVW4bF9 00:28:42.018 09:38:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:28:42.018 09:38:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:28:42.018 09:38:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:42.018 09:38:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:28:42.018 09:38:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:42.018 09:38:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:28:42.018 09:38:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:42.018 09:38:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:42.276 rmmod nvme_tcp 00:28:42.276 rmmod nvme_fabrics 00:28:42.276 rmmod nvme_keyring 00:28:42.276 09:38:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:42.276 09:38:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:28:42.276 09:38:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:28:42.276 09:38:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 833363 ']' 00:28:42.276 09:38:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 833363 00:28:42.276 09:38:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 833363 ']' 00:28:42.276 09:38:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 833363 00:28:42.276 09:38:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:28:42.276 09:38:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:42.276 09:38:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 833363 00:28:42.276 09:38:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:42.276 09:38:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:42.276 09:38:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 833363' 00:28:42.276 killing process with pid 833363 00:28:42.276 09:38:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 833363 00:28:42.276 [2024-07-14 09:38:26.545032] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for 
removal in v24.09 hit 1 times 00:28:42.276 [2024-07-14 09:38:26.545072] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:28:42.276 09:38:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 833363 00:28:42.535 09:38:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:42.535 09:38:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:42.535 09:38:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:42.535 09:38:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:42.535 09:38:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:42.535 09:38:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:42.535 09:38:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:42.535 09:38:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:44.437 09:38:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:44.437 00:28:44.437 real 0m5.457s 00:28:44.437 user 0m2.066s 00:28:44.437 sys 0m1.762s 00:28:44.437 09:38:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:44.437 09:38:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:44.437 ************************************ 00:28:44.437 END TEST nvmf_async_init 00:28:44.437 ************************************ 00:28:44.437 09:38:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:44.437 09:38:28 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:44.437 09:38:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:44.437 09:38:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:44.437 09:38:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:44.437 ************************************ 00:28:44.437 START TEST dma 00:28:44.437 ************************************ 00:28:44.437 09:38:28 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:44.437 * Looking for test storage... 
00:28:44.437 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:44.437 09:38:28 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:44.437 09:38:28 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:28:44.696 09:38:28 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:44.696 09:38:28 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:44.696 09:38:28 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:44.696 09:38:28 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:44.696 09:38:28 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:44.696 09:38:28 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:44.696 09:38:28 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:44.696 09:38:28 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:44.696 09:38:28 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:44.696 09:38:28 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:44.696 09:38:28 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:44.696 09:38:28 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:44.696 09:38:28 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:44.696 09:38:28 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:44.696 09:38:28 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:44.696 09:38:28 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:44.696 09:38:28 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:44.696 09:38:28 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:44.696 09:38:28 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:44.696 09:38:28 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:44.696 09:38:28 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.696 09:38:28 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.696 09:38:28 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.696 09:38:28 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:28:44.696 09:38:28 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.696 09:38:28 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:28:44.696 09:38:28 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:44.696 09:38:28 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:44.696 09:38:28 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:44.696 09:38:28 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:44.696 09:38:28 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:44.696 09:38:28 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:44.696 09:38:28 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:44.696 09:38:28 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:44.696 09:38:28 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:28:44.696 09:38:28 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:28:44.696 00:28:44.696 real 0m0.071s 00:28:44.696 user 0m0.031s 00:28:44.696 sys 0m0.046s 00:28:44.696 09:38:28 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:44.696 09:38:28 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:28:44.696 ************************************ 00:28:44.696 END TEST dma 00:28:44.696 ************************************ 00:28:44.696 09:38:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:44.696 09:38:28 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:44.697 09:38:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:44.697 09:38:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:44.697 09:38:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:44.697 ************************************ 00:28:44.697 START TEST nvmf_identify 00:28:44.697 ************************************ 00:28:44.697 09:38:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:44.697 * Looking for test storage... 
00:28:44.697 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:44.697 09:38:29 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:44.697 09:38:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:28:44.697 09:38:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:44.697 09:38:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:44.697 09:38:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:44.697 09:38:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:44.697 09:38:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:44.697 09:38:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:44.697 09:38:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:44.697 09:38:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:44.697 09:38:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:44.697 09:38:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:44.697 09:38:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:44.697 09:38:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:44.697 09:38:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:44.697 09:38:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:44.697 09:38:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:44.697 09:38:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:44.697 09:38:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:44.697 09:38:29 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:44.697 09:38:29 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:44.697 09:38:29 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:44.697 09:38:29 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.697 09:38:29 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.697 09:38:29 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.697 09:38:29 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:28:44.697 09:38:29 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.697 09:38:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:28:44.697 09:38:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:44.697 09:38:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:44.697 09:38:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:44.697 09:38:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:44.697 09:38:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:44.697 09:38:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:44.697 09:38:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:44.697 09:38:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:44.697 09:38:29 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:44.697 09:38:29 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:44.697 09:38:29 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:28:44.697 09:38:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:44.697 09:38:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:44.697 09:38:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:44.697 09:38:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:44.697 09:38:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:44.697 09:38:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:44.697 09:38:29 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:44.697 09:38:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:44.697 09:38:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:44.697 09:38:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:44.697 09:38:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:28:44.697 09:38:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:46.598 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:46.598 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:28:46.598 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:46.598 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:46.598 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:46.598 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:46.598 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:46.598 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:46.599 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:46.599 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:46.599 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:46.599 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:46.599 09:38:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:46.599 09:38:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:46.858 09:38:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:46.858 09:38:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:46.858 09:38:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:46.858 09:38:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:46.858 09:38:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:46.858 09:38:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:46.858 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:46.858 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:28:46.858 00:28:46.858 --- 10.0.0.2 ping statistics --- 00:28:46.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:46.858 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:28:46.858 09:38:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:46.858 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:46.858 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.253 ms 00:28:46.858 00:28:46.858 --- 10.0.0.1 ping statistics --- 00:28:46.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:46.858 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:28:46.858 09:38:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:46.858 09:38:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:28:46.858 09:38:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:46.858 09:38:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:46.858 09:38:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:46.858 09:38:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:46.858 09:38:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:46.858 09:38:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:46.858 09:38:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:46.858 09:38:31 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:28:46.858 09:38:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:46.858 09:38:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:46.858 09:38:31 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=835487 00:28:46.858 09:38:31 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:46.858 09:38:31 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:46.858 09:38:31 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 835487 00:28:46.858 09:38:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 835487 ']' 00:28:46.858 09:38:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:46.858 09:38:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:46.858 09:38:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:46.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:46.858 09:38:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:46.858 09:38:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:46.858 [2024-07-14 09:38:31.207445] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:28:46.858 [2024-07-14 09:38:31.207540] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:46.858 EAL: No free 2048 kB hugepages reported on node 1 00:28:46.858 [2024-07-14 09:38:31.274118] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:47.116 [2024-07-14 09:38:31.366776] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:28:47.116 [2024-07-14 09:38:31.366837] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:47.116 [2024-07-14 09:38:31.366875] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:47.116 [2024-07-14 09:38:31.366888] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:47.116 [2024-07-14 09:38:31.366899] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:47.116 [2024-07-14 09:38:31.367255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:47.116 [2024-07-14 09:38:31.367315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:47.116 [2024-07-14 09:38:31.367384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:47.116 [2024-07-14 09:38:31.367386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:47.116 09:38:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:47.116 09:38:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:28:47.116 09:38:31 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:47.116 09:38:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.116 09:38:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:47.116 [2024-07-14 09:38:31.495656] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:47.116 09:38:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.116 09:38:31 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:28:47.116 09:38:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:47.116 09:38:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:47.116 09:38:31 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:47.116 09:38:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.116 09:38:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:47.116 Malloc0 00:28:47.116 09:38:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.116 09:38:31 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:47.116 09:38:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.116 09:38:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:47.116 09:38:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.116 09:38:31 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:28:47.116 09:38:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.116 09:38:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:47.436 09:38:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.436 09:38:31 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:47.436 09:38:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 
-- # xtrace_disable 00:28:47.436 09:38:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:47.436 [2024-07-14 09:38:31.577237] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:47.436 09:38:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.436 09:38:31 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:47.436 09:38:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.436 09:38:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:47.436 09:38:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.436 09:38:31 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:28:47.436 09:38:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.436 09:38:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:47.436 [ 00:28:47.436 { 00:28:47.436 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:47.436 "subtype": "Discovery", 00:28:47.436 "listen_addresses": [ 00:28:47.436 { 00:28:47.436 "trtype": "TCP", 00:28:47.436 "adrfam": "IPv4", 00:28:47.436 "traddr": "10.0.0.2", 00:28:47.436 "trsvcid": "4420" 00:28:47.436 } 00:28:47.436 ], 00:28:47.436 "allow_any_host": true, 00:28:47.436 "hosts": [] 00:28:47.436 }, 00:28:47.436 { 00:28:47.436 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:47.436 "subtype": "NVMe", 00:28:47.436 "listen_addresses": [ 00:28:47.436 { 00:28:47.436 "trtype": "TCP", 00:28:47.436 "adrfam": "IPv4", 00:28:47.436 "traddr": "10.0.0.2", 00:28:47.436 "trsvcid": "4420" 00:28:47.436 } 00:28:47.436 ], 00:28:47.436 "allow_any_host": true, 00:28:47.436 "hosts": [], 00:28:47.436 "serial_number": "SPDK00000000000001", 00:28:47.436 "model_number": "SPDK bdev Controller", 00:28:47.436 "max_namespaces": 32, 00:28:47.436 "min_cntlid": 1, 00:28:47.436 "max_cntlid": 65519, 00:28:47.436 "namespaces": [ 00:28:47.436 { 00:28:47.436 "nsid": 1, 00:28:47.436 "bdev_name": "Malloc0", 00:28:47.436 "name": "Malloc0", 00:28:47.436 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:28:47.436 "eui64": "ABCDEF0123456789", 00:28:47.436 "uuid": "6c9237a2-7ffc-4fe8-a776-83fc1be08c13" 00:28:47.436 } 00:28:47.436 ] 00:28:47.436 } 00:28:47.436 ] 00:28:47.436 09:38:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.436 09:38:31 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:28:47.436 [2024-07-14 09:38:31.619722] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:28:47.436 [2024-07-14 09:38:31.619767] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid835509 ] 00:28:47.436 EAL: No free 2048 kB hugepages reported on node 1 00:28:47.436 [2024-07-14 09:38:31.655118] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:28:47.436 [2024-07-14 09:38:31.655185] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:47.436 [2024-07-14 09:38:31.655195] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:47.436 [2024-07-14 09:38:31.655226] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:47.436 [2024-07-14 09:38:31.655237] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:47.436 [2024-07-14 09:38:31.655530] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:28:47.436 [2024-07-14 09:38:31.655586] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xf32ae0 0 00:28:47.436 [2024-07-14 09:38:31.661894] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:47.436 [2024-07-14 09:38:31.661929] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:47.436 [2024-07-14 09:38:31.661938] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:47.436 [2024-07-14 09:38:31.661944] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:47.436 [2024-07-14 09:38:31.662010] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.436 [2024-07-14 09:38:31.662024] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.436 [2024-07-14 09:38:31.662031] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf32ae0) 00:28:47.436 [2024-07-14 09:38:31.662049] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:47.436 [2024-07-14 09:38:31.662075] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf89240, cid 0, qid 0 00:28:47.436 [2024-07-14 09:38:31.669891] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.436 [2024-07-14 09:38:31.669909] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.436 [2024-07-14 09:38:31.669916] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.436 [2024-07-14 09:38:31.669923] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf89240) on tqpair=0xf32ae0 00:28:47.436 [2024-07-14 09:38:31.669959] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:47.436 [2024-07-14 09:38:31.669971] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:28:47.436 [2024-07-14 09:38:31.669979] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:28:47.436 [2024-07-14 09:38:31.670001] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.436 [2024-07-14 09:38:31.670010] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.436 [2024-07-14 09:38:31.670016] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf32ae0) 00:28:47.436 [2024-07-14 09:38:31.670028] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.436 [2024-07-14 09:38:31.670051] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf89240, cid 0, qid 0 00:28:47.436 [2024-07-14 09:38:31.670234] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.436 [2024-07-14 09:38:31.670250] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.436 [2024-07-14 09:38:31.670257] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.436 [2024-07-14 09:38:31.670268] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf89240) on tqpair=0xf32ae0 00:28:47.436 [2024-07-14 09:38:31.670278] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:28:47.436 [2024-07-14 09:38:31.670292] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:28:47.436 [2024-07-14 09:38:31.670305] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.436 [2024-07-14 09:38:31.670312] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.436 [2024-07-14 09:38:31.670319] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf32ae0) 00:28:47.436 [2024-07-14 09:38:31.670329] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.436 [2024-07-14 09:38:31.670366] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf89240, cid 0, qid 0 00:28:47.436 [2024-07-14 09:38:31.670599] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.436 [2024-07-14 09:38:31.670611] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.436 [2024-07-14 09:38:31.670618] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.436 [2024-07-14 09:38:31.670625] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf89240) on tqpair=0xf32ae0 00:28:47.436 [2024-07-14 09:38:31.670633] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:28:47.436 [2024-07-14 09:38:31.670648] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:28:47.436 [2024-07-14 09:38:31.670660] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.436 [2024-07-14 09:38:31.670667] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.436 [2024-07-14 09:38:31.670674] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf32ae0) 00:28:47.436 [2024-07-14 09:38:31.670684] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.437 [2024-07-14 09:38:31.670705] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf89240, cid 0, qid 0 00:28:47.437 [2024-07-14 09:38:31.670899] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.437 
[2024-07-14 09:38:31.670914] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.437 [2024-07-14 09:38:31.670920] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.437 [2024-07-14 09:38:31.670927] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf89240) on tqpair=0xf32ae0 00:28:47.437 [2024-07-14 09:38:31.670936] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:47.437 [2024-07-14 09:38:31.670953] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.437 [2024-07-14 09:38:31.670962] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.437 [2024-07-14 09:38:31.670968] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf32ae0) 00:28:47.437 [2024-07-14 09:38:31.670979] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.437 [2024-07-14 09:38:31.670999] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf89240, cid 0, qid 0 00:28:47.437 [2024-07-14 09:38:31.671157] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.437 [2024-07-14 09:38:31.671172] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.437 [2024-07-14 09:38:31.671179] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.437 [2024-07-14 09:38:31.671186] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf89240) on tqpair=0xf32ae0 00:28:47.437 [2024-07-14 09:38:31.671194] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:28:47.437 [2024-07-14 09:38:31.671207] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:28:47.437 [2024-07-14 09:38:31.671221] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:47.437 [2024-07-14 09:38:31.671331] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:28:47.437 [2024-07-14 09:38:31.671340] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:47.437 [2024-07-14 09:38:31.671354] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.437 [2024-07-14 09:38:31.671361] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.437 [2024-07-14 09:38:31.671367] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf32ae0) 00:28:47.437 [2024-07-14 09:38:31.671378] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.437 [2024-07-14 09:38:31.671399] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf89240, cid 0, qid 0 00:28:47.437 [2024-07-14 09:38:31.671595] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.437 [2024-07-14 09:38:31.671611] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.437 [2024-07-14 09:38:31.671617] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:28:47.437 [2024-07-14 09:38:31.671624] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf89240) on tqpair=0xf32ae0 00:28:47.437 [2024-07-14 09:38:31.671633] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:47.437 [2024-07-14 09:38:31.671650] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.437 [2024-07-14 09:38:31.671659] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.437 [2024-07-14 09:38:31.671665] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf32ae0) 00:28:47.437 [2024-07-14 09:38:31.671675] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.437 [2024-07-14 09:38:31.671696] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf89240, cid 0, qid 0 00:28:47.437 [2024-07-14 09:38:31.671873] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.437 [2024-07-14 09:38:31.671889] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.437 [2024-07-14 09:38:31.671896] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.437 [2024-07-14 09:38:31.671903] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf89240) on tqpair=0xf32ae0 00:28:47.437 [2024-07-14 09:38:31.671910] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:47.437 [2024-07-14 09:38:31.671919] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:28:47.437 [2024-07-14 09:38:31.671933] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:28:47.437 [2024-07-14 09:38:31.671952] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:28:47.437 [2024-07-14 09:38:31.671967] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.437 [2024-07-14 09:38:31.671975] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf32ae0) 00:28:47.437 [2024-07-14 09:38:31.671986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.437 [2024-07-14 09:38:31.672007] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf89240, cid 0, qid 0 00:28:47.437 [2024-07-14 09:38:31.672206] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:47.437 [2024-07-14 09:38:31.672219] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:47.437 [2024-07-14 09:38:31.672226] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:47.437 [2024-07-14 09:38:31.672232] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf32ae0): datao=0, datal=4096, cccid=0 00:28:47.437 [2024-07-14 09:38:31.672240] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf89240) on tqpair(0xf32ae0): expected_datao=0, payload_size=4096 00:28:47.437 [2024-07-14 09:38:31.672248] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:28:47.437 [2024-07-14 09:38:31.672288] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:47.437 [2024-07-14 09:38:31.672297] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:47.437 [2024-07-14 09:38:31.713020] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.437 [2024-07-14 09:38:31.713039] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.437 [2024-07-14 09:38:31.713047] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.437 [2024-07-14 09:38:31.713054] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf89240) on tqpair=0xf32ae0 00:28:47.437 [2024-07-14 09:38:31.713067] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:28:47.437 [2024-07-14 09:38:31.713081] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:28:47.437 [2024-07-14 09:38:31.713089] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:28:47.437 [2024-07-14 09:38:31.713099] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:28:47.437 [2024-07-14 09:38:31.713107] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:28:47.437 [2024-07-14 09:38:31.713115] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:28:47.437 [2024-07-14 09:38:31.713130] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:28:47.437 [2024-07-14 09:38:31.713143] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.437 [2024-07-14 09:38:31.713150] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.437 [2024-07-14 09:38:31.713157] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf32ae0) 00:28:47.437 [2024-07-14 09:38:31.713168] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:47.437 [2024-07-14 09:38:31.713191] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf89240, cid 0, qid 0 00:28:47.437 [2024-07-14 09:38:31.713357] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.437 [2024-07-14 09:38:31.713372] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.437 [2024-07-14 09:38:31.713379] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.437 [2024-07-14 09:38:31.713386] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf89240) on tqpair=0xf32ae0 00:28:47.437 [2024-07-14 09:38:31.713398] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.437 [2024-07-14 09:38:31.713405] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.437 [2024-07-14 09:38:31.713412] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf32ae0) 00:28:47.437 [2024-07-14 09:38:31.713422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:47.437 [2024-07-14 09:38:31.713432] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.437 [2024-07-14 09:38:31.713439] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.437 [2024-07-14 09:38:31.713445] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xf32ae0) 00:28:47.437 [2024-07-14 09:38:31.713458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:47.437 [2024-07-14 09:38:31.713469] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.437 [2024-07-14 09:38:31.713476] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.437 [2024-07-14 09:38:31.713482] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xf32ae0) 00:28:47.437 [2024-07-14 09:38:31.713506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:47.437 [2024-07-14 09:38:31.713517] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.437 [2024-07-14 09:38:31.713523] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.437 [2024-07-14 09:38:31.713529] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf32ae0) 00:28:47.437 [2024-07-14 09:38:31.713538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:47.437 [2024-07-14 09:38:31.713547] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:28:47.437 [2024-07-14 09:38:31.713566] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:47.437 [2024-07-14 09:38:31.713578] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.437 [2024-07-14 09:38:31.713585] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf32ae0) 00:28:47.437 [2024-07-14 09:38:31.713595] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.437 [2024-07-14 09:38:31.713617] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf89240, cid 0, qid 0 00:28:47.437 [2024-07-14 09:38:31.713643] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf893c0, cid 1, qid 0 00:28:47.437 [2024-07-14 09:38:31.713651] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf89540, cid 2, qid 0 00:28:47.437 [2024-07-14 09:38:31.713658] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf896c0, cid 3, qid 0 00:28:47.437 [2024-07-14 09:38:31.713666] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf89840, cid 4, qid 0 00:28:47.437 [2024-07-14 09:38:31.713878] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.437 [2024-07-14 09:38:31.713892] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.437 [2024-07-14 09:38:31.713899] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.437 [2024-07-14 09:38:31.713906] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf89840) on tqpair=0xf32ae0 00:28:47.438 [2024-07-14 09:38:31.713915] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:28:47.438 [2024-07-14 09:38:31.713923] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:28:47.438 [2024-07-14 09:38:31.713941] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.438 [2024-07-14 09:38:31.713950] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf32ae0) 00:28:47.438 [2024-07-14 09:38:31.713960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.438 [2024-07-14 09:38:31.713982] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf89840, cid 4, qid 0 00:28:47.438 [2024-07-14 09:38:31.714146] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:47.438 [2024-07-14 09:38:31.714161] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:47.438 [2024-07-14 09:38:31.714168] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:47.438 [2024-07-14 09:38:31.714179] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf32ae0): datao=0, datal=4096, cccid=4 00:28:47.438 [2024-07-14 09:38:31.714187] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf89840) on tqpair(0xf32ae0): expected_datao=0, payload_size=4096 00:28:47.438 [2024-07-14 09:38:31.714195] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.438 [2024-07-14 09:38:31.714228] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:47.438 [2024-07-14 09:38:31.714237] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:47.438 [2024-07-14 09:38:31.714348] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.438 [2024-07-14 09:38:31.714363] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.438 [2024-07-14 09:38:31.714370] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.438 [2024-07-14 09:38:31.714376] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf89840) on tqpair=0xf32ae0 00:28:47.438 [2024-07-14 09:38:31.714394] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:28:47.438 [2024-07-14 09:38:31.714429] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.438 [2024-07-14 09:38:31.714440] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf32ae0) 00:28:47.438 [2024-07-14 09:38:31.714451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.438 [2024-07-14 09:38:31.714462] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.438 [2024-07-14 09:38:31.714469] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.438 [2024-07-14 09:38:31.714476] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf32ae0) 00:28:47.438 [2024-07-14 09:38:31.714484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:47.438 [2024-07-14 09:38:31.714510] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0xf89840, cid 4, qid 0 00:28:47.438 [2024-07-14 09:38:31.714523] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf899c0, cid 5, qid 0 00:28:47.438 [2024-07-14 09:38:31.714761] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:47.438 [2024-07-14 09:38:31.714777] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:47.438 [2024-07-14 09:38:31.714784] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:47.438 [2024-07-14 09:38:31.714791] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf32ae0): datao=0, datal=1024, cccid=4 00:28:47.438 [2024-07-14 09:38:31.714798] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf89840) on tqpair(0xf32ae0): expected_datao=0, payload_size=1024 00:28:47.438 [2024-07-14 09:38:31.714806] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.438 [2024-07-14 09:38:31.714816] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:47.438 [2024-07-14 09:38:31.714823] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:47.438 [2024-07-14 09:38:31.714831] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.438 [2024-07-14 09:38:31.714855] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.438 [2024-07-14 09:38:31.714861] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.438 [2024-07-14 09:38:31.714875] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf899c0) on tqpair=0xf32ae0 00:28:47.438 [2024-07-14 09:38:31.755024] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.438 [2024-07-14 09:38:31.755043] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.438 [2024-07-14 09:38:31.755051] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.438 [2024-07-14 09:38:31.755058] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf89840) on tqpair=0xf32ae0 00:28:47.438 [2024-07-14 09:38:31.755074] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.438 [2024-07-14 09:38:31.755083] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf32ae0) 00:28:47.438 [2024-07-14 09:38:31.755098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.438 [2024-07-14 09:38:31.755129] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf89840, cid 4, qid 0 00:28:47.438 [2024-07-14 09:38:31.755323] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:47.438 [2024-07-14 09:38:31.755339] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:47.438 [2024-07-14 09:38:31.755346] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:47.438 [2024-07-14 09:38:31.755352] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf32ae0): datao=0, datal=3072, cccid=4 00:28:47.438 [2024-07-14 09:38:31.755360] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf89840) on tqpair(0xf32ae0): expected_datao=0, payload_size=3072 00:28:47.438 [2024-07-14 09:38:31.755368] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.438 [2024-07-14 09:38:31.755378] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:47.438 [2024-07-14 09:38:31.755385] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:47.438 [2024-07-14 09:38:31.755430] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.438 [2024-07-14 09:38:31.755442] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.438 [2024-07-14 09:38:31.755449] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.438 [2024-07-14 09:38:31.755456] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf89840) on tqpair=0xf32ae0 00:28:47.438 [2024-07-14 09:38:31.755470] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.438 [2024-07-14 09:38:31.755479] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf32ae0) 00:28:47.438 [2024-07-14 09:38:31.755490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.438 [2024-07-14 09:38:31.755518] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf89840, cid 4, qid 0 00:28:47.438 [2024-07-14 09:38:31.755698] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:47.438 [2024-07-14 09:38:31.755711] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:47.438 [2024-07-14 09:38:31.755718] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:47.438 [2024-07-14 09:38:31.755724] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf32ae0): datao=0, datal=8, cccid=4 00:28:47.438 [2024-07-14 09:38:31.755732] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf89840) on tqpair(0xf32ae0): expected_datao=0, payload_size=8 00:28:47.438 [2024-07-14 09:38:31.755739] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.438 [2024-07-14 09:38:31.755749] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:47.438 [2024-07-14 09:38:31.755756] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:47.438 [2024-07-14 09:38:31.796025] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.438 [2024-07-14 09:38:31.796044] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.438 [2024-07-14 09:38:31.796052] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.438 [2024-07-14 09:38:31.796059] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf89840) on tqpair=0xf32ae0 00:28:47.438 ===================================================== 00:28:47.438 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:47.438 ===================================================== 00:28:47.438 Controller Capabilities/Features 00:28:47.438 ================================ 00:28:47.438 Vendor ID: 0000 00:28:47.438 Subsystem Vendor ID: 0000 00:28:47.438 Serial Number: .................... 00:28:47.438 Model Number: ........................................ 
00:28:47.438 Firmware Version: 24.09 00:28:47.438 Recommended Arb Burst: 0 00:28:47.438 IEEE OUI Identifier: 00 00 00 00:28:47.438 Multi-path I/O 00:28:47.438 May have multiple subsystem ports: No 00:28:47.438 May have multiple controllers: No 00:28:47.438 Associated with SR-IOV VF: No 00:28:47.438 Max Data Transfer Size: 131072 00:28:47.438 Max Number of Namespaces: 0 00:28:47.438 Max Number of I/O Queues: 1024 00:28:47.438 NVMe Specification Version (VS): 1.3 00:28:47.438 NVMe Specification Version (Identify): 1.3 00:28:47.438 Maximum Queue Entries: 128 00:28:47.438 Contiguous Queues Required: Yes 00:28:47.438 Arbitration Mechanisms Supported 00:28:47.438 Weighted Round Robin: Not Supported 00:28:47.438 Vendor Specific: Not Supported 00:28:47.438 Reset Timeout: 15000 ms 00:28:47.438 Doorbell Stride: 4 bytes 00:28:47.438 NVM Subsystem Reset: Not Supported 00:28:47.438 Command Sets Supported 00:28:47.438 NVM Command Set: Supported 00:28:47.438 Boot Partition: Not Supported 00:28:47.438 Memory Page Size Minimum: 4096 bytes 00:28:47.438 Memory Page Size Maximum: 4096 bytes 00:28:47.438 Persistent Memory Region: Not Supported 00:28:47.438 Optional Asynchronous Events Supported 00:28:47.438 Namespace Attribute Notices: Not Supported 00:28:47.438 Firmware Activation Notices: Not Supported 00:28:47.438 ANA Change Notices: Not Supported 00:28:47.438 PLE Aggregate Log Change Notices: Not Supported 00:28:47.438 LBA Status Info Alert Notices: Not Supported 00:28:47.438 EGE Aggregate Log Change Notices: Not Supported 00:28:47.438 Normal NVM Subsystem Shutdown event: Not Supported 00:28:47.438 Zone Descriptor Change Notices: Not Supported 00:28:47.438 Discovery Log Change Notices: Supported 00:28:47.438 Controller Attributes 00:28:47.438 128-bit Host Identifier: Not Supported 00:28:47.438 Non-Operational Permissive Mode: Not Supported 00:28:47.438 NVM Sets: Not Supported 00:28:47.438 Read Recovery Levels: Not Supported 00:28:47.438 Endurance Groups: Not Supported 00:28:47.438 Predictable Latency Mode: Not Supported 00:28:47.438 Traffic Based Keep ALive: Not Supported 00:28:47.438 Namespace Granularity: Not Supported 00:28:47.438 SQ Associations: Not Supported 00:28:47.438 UUID List: Not Supported 00:28:47.438 Multi-Domain Subsystem: Not Supported 00:28:47.438 Fixed Capacity Management: Not Supported 00:28:47.438 Variable Capacity Management: Not Supported 00:28:47.438 Delete Endurance Group: Not Supported 00:28:47.438 Delete NVM Set: Not Supported 00:28:47.438 Extended LBA Formats Supported: Not Supported 00:28:47.439 Flexible Data Placement Supported: Not Supported 00:28:47.439 00:28:47.439 Controller Memory Buffer Support 00:28:47.439 ================================ 00:28:47.439 Supported: No 00:28:47.439 00:28:47.439 Persistent Memory Region Support 00:28:47.439 ================================ 00:28:47.439 Supported: No 00:28:47.439 00:28:47.439 Admin Command Set Attributes 00:28:47.439 ============================ 00:28:47.439 Security Send/Receive: Not Supported 00:28:47.439 Format NVM: Not Supported 00:28:47.439 Firmware Activate/Download: Not Supported 00:28:47.439 Namespace Management: Not Supported 00:28:47.439 Device Self-Test: Not Supported 00:28:47.439 Directives: Not Supported 00:28:47.439 NVMe-MI: Not Supported 00:28:47.439 Virtualization Management: Not Supported 00:28:47.439 Doorbell Buffer Config: Not Supported 00:28:47.439 Get LBA Status Capability: Not Supported 00:28:47.439 Command & Feature Lockdown Capability: Not Supported 00:28:47.439 Abort Command Limit: 1 00:28:47.439 Async 
Event Request Limit: 4 00:28:47.439 Number of Firmware Slots: N/A 00:28:47.439 Firmware Slot 1 Read-Only: N/A 00:28:47.439 Firmware Activation Without Reset: N/A 00:28:47.439 Multiple Update Detection Support: N/A 00:28:47.439 Firmware Update Granularity: No Information Provided 00:28:47.439 Per-Namespace SMART Log: No 00:28:47.439 Asymmetric Namespace Access Log Page: Not Supported 00:28:47.439 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:28:47.439 Command Effects Log Page: Not Supported 00:28:47.439 Get Log Page Extended Data: Supported 00:28:47.439 Telemetry Log Pages: Not Supported 00:28:47.439 Persistent Event Log Pages: Not Supported 00:28:47.439 Supported Log Pages Log Page: May Support 00:28:47.439 Commands Supported & Effects Log Page: Not Supported 00:28:47.439 Feature Identifiers & Effects Log Page:May Support 00:28:47.439 NVMe-MI Commands & Effects Log Page: May Support 00:28:47.439 Data Area 4 for Telemetry Log: Not Supported 00:28:47.439 Error Log Page Entries Supported: 128 00:28:47.439 Keep Alive: Not Supported 00:28:47.439 00:28:47.439 NVM Command Set Attributes 00:28:47.439 ========================== 00:28:47.439 Submission Queue Entry Size 00:28:47.439 Max: 1 00:28:47.439 Min: 1 00:28:47.439 Completion Queue Entry Size 00:28:47.439 Max: 1 00:28:47.439 Min: 1 00:28:47.439 Number of Namespaces: 0 00:28:47.439 Compare Command: Not Supported 00:28:47.439 Write Uncorrectable Command: Not Supported 00:28:47.439 Dataset Management Command: Not Supported 00:28:47.439 Write Zeroes Command: Not Supported 00:28:47.439 Set Features Save Field: Not Supported 00:28:47.439 Reservations: Not Supported 00:28:47.439 Timestamp: Not Supported 00:28:47.439 Copy: Not Supported 00:28:47.439 Volatile Write Cache: Not Present 00:28:47.439 Atomic Write Unit (Normal): 1 00:28:47.439 Atomic Write Unit (PFail): 1 00:28:47.439 Atomic Compare & Write Unit: 1 00:28:47.439 Fused Compare & Write: Supported 00:28:47.439 Scatter-Gather List 00:28:47.439 SGL Command Set: Supported 00:28:47.439 SGL Keyed: Supported 00:28:47.439 SGL Bit Bucket Descriptor: Not Supported 00:28:47.439 SGL Metadata Pointer: Not Supported 00:28:47.439 Oversized SGL: Not Supported 00:28:47.439 SGL Metadata Address: Not Supported 00:28:47.439 SGL Offset: Supported 00:28:47.439 Transport SGL Data Block: Not Supported 00:28:47.439 Replay Protected Memory Block: Not Supported 00:28:47.439 00:28:47.439 Firmware Slot Information 00:28:47.439 ========================= 00:28:47.439 Active slot: 0 00:28:47.439 00:28:47.439 00:28:47.439 Error Log 00:28:47.439 ========= 00:28:47.439 00:28:47.439 Active Namespaces 00:28:47.439 ================= 00:28:47.439 Discovery Log Page 00:28:47.439 ================== 00:28:47.439 Generation Counter: 2 00:28:47.439 Number of Records: 2 00:28:47.439 Record Format: 0 00:28:47.439 00:28:47.439 Discovery Log Entry 0 00:28:47.439 ---------------------- 00:28:47.439 Transport Type: 3 (TCP) 00:28:47.439 Address Family: 1 (IPv4) 00:28:47.439 Subsystem Type: 3 (Current Discovery Subsystem) 00:28:47.439 Entry Flags: 00:28:47.439 Duplicate Returned Information: 1 00:28:47.439 Explicit Persistent Connection Support for Discovery: 1 00:28:47.439 Transport Requirements: 00:28:47.439 Secure Channel: Not Required 00:28:47.439 Port ID: 0 (0x0000) 00:28:47.439 Controller ID: 65535 (0xffff) 00:28:47.439 Admin Max SQ Size: 128 00:28:47.439 Transport Service Identifier: 4420 00:28:47.439 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:47.439 Transport Address: 10.0.0.2 00:28:47.439 
Discovery Log Entry 1 00:28:47.439 ---------------------- 00:28:47.439 Transport Type: 3 (TCP) 00:28:47.439 Address Family: 1 (IPv4) 00:28:47.439 Subsystem Type: 2 (NVM Subsystem) 00:28:47.439 Entry Flags: 00:28:47.439 Duplicate Returned Information: 0 00:28:47.439 Explicit Persistent Connection Support for Discovery: 0 00:28:47.439 Transport Requirements: 00:28:47.439 Secure Channel: Not Required 00:28:47.439 Port ID: 0 (0x0000) 00:28:47.439 Controller ID: 65535 (0xffff) 00:28:47.439 Admin Max SQ Size: 128 00:28:47.439 Transport Service Identifier: 4420 00:28:47.439 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:28:47.439 Transport Address: 10.0.0.2 [2024-07-14 09:38:31.796173] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:28:47.439 [2024-07-14 09:38:31.796194] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf89240) on tqpair=0xf32ae0 00:28:47.439 [2024-07-14 09:38:31.796206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.439 [2024-07-14 09:38:31.796215] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf893c0) on tqpair=0xf32ae0 00:28:47.439 [2024-07-14 09:38:31.796223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.439 [2024-07-14 09:38:31.796234] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf89540) on tqpair=0xf32ae0 00:28:47.439 [2024-07-14 09:38:31.796243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.439 [2024-07-14 09:38:31.796252] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf896c0) on tqpair=0xf32ae0 00:28:47.439 [2024-07-14 09:38:31.796259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.439 [2024-07-14 09:38:31.796277] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.439 [2024-07-14 09:38:31.796286] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.439 [2024-07-14 09:38:31.796293] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf32ae0) 00:28:47.439 [2024-07-14 09:38:31.796319] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.439 [2024-07-14 09:38:31.796345] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf896c0, cid 3, qid 0 00:28:47.439 [2024-07-14 09:38:31.796553] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.439 [2024-07-14 09:38:31.796569] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.439 [2024-07-14 09:38:31.796576] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.439 [2024-07-14 09:38:31.796583] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf896c0) on tqpair=0xf32ae0 00:28:47.439 [2024-07-14 09:38:31.796594] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.439 [2024-07-14 09:38:31.796602] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.439 [2024-07-14 09:38:31.796609] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf32ae0) 00:28:47.439 [2024-07-14 09:38:31.796619] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.439 [2024-07-14 09:38:31.796646] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf896c0, cid 3, qid 0 00:28:47.439 [2024-07-14 09:38:31.796808] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.439 [2024-07-14 09:38:31.796820] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.439 [2024-07-14 09:38:31.796827] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.439 [2024-07-14 09:38:31.796834] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf896c0) on tqpair=0xf32ae0 00:28:47.439 [2024-07-14 09:38:31.796842] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:28:47.439 [2024-07-14 09:38:31.796851] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:28:47.439 [2024-07-14 09:38:31.796874] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.439 [2024-07-14 09:38:31.796885] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.439 [2024-07-14 09:38:31.796892] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf32ae0) 00:28:47.439 [2024-07-14 09:38:31.796902] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.439 [2024-07-14 09:38:31.796924] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf896c0, cid 3, qid 0 00:28:47.439 [2024-07-14 09:38:31.797078] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.439 [2024-07-14 09:38:31.797092] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.439 [2024-07-14 09:38:31.797098] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.439 [2024-07-14 09:38:31.797109] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf896c0) on tqpair=0xf32ae0 00:28:47.439 [2024-07-14 09:38:31.797129] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.439 [2024-07-14 09:38:31.797140] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.439 [2024-07-14 09:38:31.797146] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf32ae0) 00:28:47.439 [2024-07-14 09:38:31.797161] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.439 [2024-07-14 09:38:31.797183] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf896c0, cid 3, qid 0 00:28:47.439 [2024-07-14 09:38:31.797341] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.439 [2024-07-14 09:38:31.797357] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.440 [2024-07-14 09:38:31.797363] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.440 [2024-07-14 09:38:31.797370] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf896c0) on tqpair=0xf32ae0 00:28:47.440 [2024-07-14 09:38:31.797387] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.440 [2024-07-14 09:38:31.797396] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.440 [2024-07-14 09:38:31.797403] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf32ae0) 00:28:47.440 [2024-07-14 09:38:31.797413] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.440 [2024-07-14 09:38:31.797433] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf896c0, cid 3, qid 0 00:28:47.440 [2024-07-14 09:38:31.797585] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.440 [2024-07-14 09:38:31.797597] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.440 [2024-07-14 09:38:31.797604] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.440 [2024-07-14 09:38:31.797610] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf896c0) on tqpair=0xf32ae0 00:28:47.440 [2024-07-14 09:38:31.797626] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.440 [2024-07-14 09:38:31.797635] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.440 [2024-07-14 09:38:31.797642] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf32ae0) 00:28:47.440 [2024-07-14 09:38:31.797652] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.440 [2024-07-14 09:38:31.797672] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf896c0, cid 3, qid 0 00:28:47.440 [2024-07-14 09:38:31.797821] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.440 [2024-07-14 09:38:31.797833] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.440 [2024-07-14 09:38:31.797840] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.440 [2024-07-14 09:38:31.797847] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf896c0) on tqpair=0xf32ae0 00:28:47.440 [2024-07-14 09:38:31.797862] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.440 [2024-07-14 09:38:31.801886] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.440 [2024-07-14 09:38:31.801894] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf32ae0) 00:28:47.440 [2024-07-14 09:38:31.801905] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.440 [2024-07-14 09:38:31.801943] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf896c0, cid 3, qid 0 00:28:47.440 [2024-07-14 09:38:31.802113] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.440 [2024-07-14 09:38:31.802125] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.440 [2024-07-14 09:38:31.802132] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.440 [2024-07-14 09:38:31.802139] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf896c0) on tqpair=0xf32ae0 00:28:47.440 [2024-07-14 09:38:31.802152] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:28:47.440 00:28:47.440 09:38:31 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:28:47.440 [2024-07-14 09:38:31.836304] 
Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:28:47.440 [2024-07-14 09:38:31.836351] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid835545 ] 00:28:47.440 EAL: No free 2048 kB hugepages reported on node 1 00:28:47.701 [2024-07-14 09:38:31.870727] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:28:47.701 [2024-07-14 09:38:31.870784] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:47.701 [2024-07-14 09:38:31.870794] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:47.701 [2024-07-14 09:38:31.870817] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:47.701 [2024-07-14 09:38:31.870827] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:47.701 [2024-07-14 09:38:31.875179] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:28:47.701 [2024-07-14 09:38:31.875220] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x8faae0 0 00:28:47.701 [2024-07-14 09:38:31.888881] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:47.701 [2024-07-14 09:38:31.888900] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:47.701 [2024-07-14 09:38:31.888908] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:47.701 [2024-07-14 09:38:31.888914] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:47.701 [2024-07-14 09:38:31.888968] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.701 [2024-07-14 09:38:31.888980] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.701 [2024-07-14 09:38:31.888987] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8faae0) 00:28:47.701 [2024-07-14 09:38:31.889001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:47.701 [2024-07-14 09:38:31.889028] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x951240, cid 0, qid 0 00:28:47.701 [2024-07-14 09:38:31.895884] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.701 [2024-07-14 09:38:31.895903] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.701 [2024-07-14 09:38:31.895911] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.701 [2024-07-14 09:38:31.895919] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x951240) on tqpair=0x8faae0 00:28:47.701 [2024-07-14 09:38:31.895938] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:47.701 [2024-07-14 09:38:31.895950] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:28:47.701 [2024-07-14 09:38:31.895959] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:28:47.701 [2024-07-14 09:38:31.895978] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.701 [2024-07-14 09:38:31.895987] 
nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.701 [2024-07-14 09:38:31.895994] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8faae0) 00:28:47.701 [2024-07-14 09:38:31.896005] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.701 [2024-07-14 09:38:31.896029] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x951240, cid 0, qid 0 00:28:47.701 [2024-07-14 09:38:31.896203] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.701 [2024-07-14 09:38:31.896235] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.701 [2024-07-14 09:38:31.896243] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.701 [2024-07-14 09:38:31.896250] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x951240) on tqpair=0x8faae0 00:28:47.701 [2024-07-14 09:38:31.896258] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:28:47.701 [2024-07-14 09:38:31.896271] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:28:47.701 [2024-07-14 09:38:31.896284] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.701 [2024-07-14 09:38:31.896292] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.701 [2024-07-14 09:38:31.896298] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8faae0) 00:28:47.701 [2024-07-14 09:38:31.896309] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.701 [2024-07-14 09:38:31.896345] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x951240, cid 0, qid 0 00:28:47.701 [2024-07-14 09:38:31.896573] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.701 [2024-07-14 09:38:31.896589] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.701 [2024-07-14 09:38:31.896596] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.701 [2024-07-14 09:38:31.896602] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x951240) on tqpair=0x8faae0 00:28:47.701 [2024-07-14 09:38:31.896611] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:28:47.701 [2024-07-14 09:38:31.896625] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:28:47.701 [2024-07-14 09:38:31.896637] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.701 [2024-07-14 09:38:31.896645] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.701 [2024-07-14 09:38:31.896651] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8faae0) 00:28:47.701 [2024-07-14 09:38:31.896662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.701 [2024-07-14 09:38:31.896683] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x951240, cid 0, qid 0 00:28:47.701 [2024-07-14 09:38:31.896966] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.701 [2024-07-14 09:38:31.896981] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.701 [2024-07-14 09:38:31.896988] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.701 [2024-07-14 09:38:31.896995] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x951240) on tqpair=0x8faae0 00:28:47.701 [2024-07-14 09:38:31.897003] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:47.701 [2024-07-14 09:38:31.897021] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.701 [2024-07-14 09:38:31.897030] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.701 [2024-07-14 09:38:31.897036] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8faae0) 00:28:47.701 [2024-07-14 09:38:31.897047] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.701 [2024-07-14 09:38:31.897068] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x951240, cid 0, qid 0 00:28:47.701 [2024-07-14 09:38:31.897268] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.701 [2024-07-14 09:38:31.897283] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.701 [2024-07-14 09:38:31.897290] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.701 [2024-07-14 09:38:31.897297] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x951240) on tqpair=0x8faae0 00:28:47.701 [2024-07-14 09:38:31.897308] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:28:47.701 [2024-07-14 09:38:31.897317] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:28:47.701 [2024-07-14 09:38:31.897331] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:47.701 [2024-07-14 09:38:31.897441] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:28:47.701 [2024-07-14 09:38:31.897448] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:47.701 [2024-07-14 09:38:31.897461] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.701 [2024-07-14 09:38:31.897468] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.701 [2024-07-14 09:38:31.897475] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8faae0) 00:28:47.701 [2024-07-14 09:38:31.897485] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.702 [2024-07-14 09:38:31.897506] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x951240, cid 0, qid 0 00:28:47.702 [2024-07-14 09:38:31.897709] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.702 [2024-07-14 09:38:31.897724] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.702 [2024-07-14 09:38:31.897731] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.702 [2024-07-14 09:38:31.897737] nvme_tcp.c:1069:nvme_tcp_req_complete: 
*DEBUG*: complete tcp_req(0x951240) on tqpair=0x8faae0 00:28:47.702 [2024-07-14 09:38:31.897746] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:47.702 [2024-07-14 09:38:31.897763] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.702 [2024-07-14 09:38:31.897773] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.702 [2024-07-14 09:38:31.897779] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8faae0) 00:28:47.702 [2024-07-14 09:38:31.897790] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.702 [2024-07-14 09:38:31.897810] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x951240, cid 0, qid 0 00:28:47.702 [2024-07-14 09:38:31.898084] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.702 [2024-07-14 09:38:31.898101] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.702 [2024-07-14 09:38:31.898108] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.702 [2024-07-14 09:38:31.898114] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x951240) on tqpair=0x8faae0 00:28:47.702 [2024-07-14 09:38:31.898122] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:47.702 [2024-07-14 09:38:31.898130] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:28:47.702 [2024-07-14 09:38:31.898154] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:28:47.702 [2024-07-14 09:38:31.898169] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:28:47.702 [2024-07-14 09:38:31.898183] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.702 [2024-07-14 09:38:31.898190] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8faae0) 00:28:47.702 [2024-07-14 09:38:31.898216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.702 [2024-07-14 09:38:31.898238] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x951240, cid 0, qid 0 00:28:47.702 [2024-07-14 09:38:31.898501] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:47.702 [2024-07-14 09:38:31.898517] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:47.702 [2024-07-14 09:38:31.898524] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:47.702 [2024-07-14 09:38:31.898530] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8faae0): datao=0, datal=4096, cccid=0 00:28:47.702 [2024-07-14 09:38:31.898538] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x951240) on tqpair(0x8faae0): expected_datao=0, payload_size=4096 00:28:47.702 [2024-07-14 09:38:31.898545] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.702 [2024-07-14 09:38:31.898556] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:47.702 [2024-07-14 09:38:31.898563] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:47.702 [2024-07-14 09:38:31.898612] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.702 [2024-07-14 09:38:31.898623] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.702 [2024-07-14 09:38:31.898630] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.702 [2024-07-14 09:38:31.898637] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x951240) on tqpair=0x8faae0 00:28:47.702 [2024-07-14 09:38:31.898647] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:28:47.702 [2024-07-14 09:38:31.898660] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:28:47.702 [2024-07-14 09:38:31.898668] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:28:47.702 [2024-07-14 09:38:31.898675] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:28:47.702 [2024-07-14 09:38:31.898683] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:28:47.702 [2024-07-14 09:38:31.898691] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:28:47.702 [2024-07-14 09:38:31.898706] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:28:47.702 [2024-07-14 09:38:31.898718] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.702 [2024-07-14 09:38:31.898726] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.702 [2024-07-14 09:38:31.898732] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8faae0) 00:28:47.702 [2024-07-14 09:38:31.898743] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:47.702 [2024-07-14 09:38:31.898764] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x951240, cid 0, qid 0 00:28:47.702 [2024-07-14 09:38:31.899031] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.702 [2024-07-14 09:38:31.899046] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.702 [2024-07-14 09:38:31.899053] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.702 [2024-07-14 09:38:31.899060] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x951240) on tqpair=0x8faae0 00:28:47.702 [2024-07-14 09:38:31.899070] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.702 [2024-07-14 09:38:31.899078] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.702 [2024-07-14 09:38:31.899084] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8faae0) 00:28:47.702 [2024-07-14 09:38:31.899094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:47.702 [2024-07-14 09:38:31.899104] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.702 [2024-07-14 09:38:31.899111] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.702 [2024-07-14 09:38:31.899117] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x8faae0) 00:28:47.702 [2024-07-14 09:38:31.899130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:47.702 [2024-07-14 09:38:31.899141] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.702 [2024-07-14 09:38:31.899148] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.702 [2024-07-14 09:38:31.899154] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x8faae0) 00:28:47.702 [2024-07-14 09:38:31.899163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:47.702 [2024-07-14 09:38:31.899188] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.702 [2024-07-14 09:38:31.899194] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.702 [2024-07-14 09:38:31.899200] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8faae0) 00:28:47.702 [2024-07-14 09:38:31.899209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:47.702 [2024-07-14 09:38:31.899218] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:28:47.702 [2024-07-14 09:38:31.899240] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:47.702 [2024-07-14 09:38:31.899252] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.702 [2024-07-14 09:38:31.899260] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8faae0) 00:28:47.702 [2024-07-14 09:38:31.899269] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.702 [2024-07-14 09:38:31.899296] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x951240, cid 0, qid 0 00:28:47.702 [2024-07-14 09:38:31.899322] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9513c0, cid 1, qid 0 00:28:47.702 [2024-07-14 09:38:31.899330] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x951540, cid 2, qid 0 00:28:47.702 [2024-07-14 09:38:31.899338] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9516c0, cid 3, qid 0 00:28:47.702 [2024-07-14 09:38:31.899345] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x951840, cid 4, qid 0 00:28:47.702 [2024-07-14 09:38:31.899559] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.702 [2024-07-14 09:38:31.899574] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.702 [2024-07-14 09:38:31.899580] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.702 [2024-07-14 09:38:31.899587] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x951840) on tqpair=0x8faae0 00:28:47.702 [2024-07-14 09:38:31.899595] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:28:47.702 [2024-07-14 09:38:31.899605] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify 
controller iocs specific (timeout 30000 ms) 00:28:47.702 [2024-07-14 09:38:31.899634] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:28:47.702 [2024-07-14 09:38:31.899645] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:28:47.702 [2024-07-14 09:38:31.899655] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.702 [2024-07-14 09:38:31.899663] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.702 [2024-07-14 09:38:31.899669] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8faae0) 00:28:47.702 [2024-07-14 09:38:31.899679] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:47.702 [2024-07-14 09:38:31.899699] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x951840, cid 4, qid 0 00:28:47.702 [2024-07-14 09:38:31.903878] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.702 [2024-07-14 09:38:31.903896] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.702 [2024-07-14 09:38:31.903904] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.702 [2024-07-14 09:38:31.903910] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x951840) on tqpair=0x8faae0 00:28:47.702 [2024-07-14 09:38:31.903974] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:28:47.702 [2024-07-14 09:38:31.903993] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:28:47.702 [2024-07-14 09:38:31.904007] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.702 [2024-07-14 09:38:31.904015] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8faae0) 00:28:47.702 [2024-07-14 09:38:31.904026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.702 [2024-07-14 09:38:31.904048] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x951840, cid 4, qid 0 00:28:47.702 [2024-07-14 09:38:31.904234] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:47.702 [2024-07-14 09:38:31.904250] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:47.702 [2024-07-14 09:38:31.904257] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:47.702 [2024-07-14 09:38:31.904264] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8faae0): datao=0, datal=4096, cccid=4 00:28:47.702 [2024-07-14 09:38:31.904271] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x951840) on tqpair(0x8faae0): expected_datao=0, payload_size=4096 00:28:47.703 [2024-07-14 09:38:31.904279] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.703 [2024-07-14 09:38:31.904289] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:47.703 [2024-07-14 09:38:31.904296] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:47.703 [2024-07-14 09:38:31.904370] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:28:47.703 [2024-07-14 09:38:31.904382] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.703 [2024-07-14 09:38:31.904389] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.703 [2024-07-14 09:38:31.904395] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x951840) on tqpair=0x8faae0 00:28:47.703 [2024-07-14 09:38:31.904409] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:28:47.703 [2024-07-14 09:38:31.904429] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:28:47.703 [2024-07-14 09:38:31.904447] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:28:47.703 [2024-07-14 09:38:31.904460] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.703 [2024-07-14 09:38:31.904468] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8faae0) 00:28:47.703 [2024-07-14 09:38:31.904478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.703 [2024-07-14 09:38:31.904499] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x951840, cid 4, qid 0 00:28:47.703 [2024-07-14 09:38:31.904694] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:47.703 [2024-07-14 09:38:31.904706] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:47.703 [2024-07-14 09:38:31.904713] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:47.703 [2024-07-14 09:38:31.904719] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8faae0): datao=0, datal=4096, cccid=4 00:28:47.703 [2024-07-14 09:38:31.904727] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x951840) on tqpair(0x8faae0): expected_datao=0, payload_size=4096 00:28:47.703 [2024-07-14 09:38:31.904739] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.703 [2024-07-14 09:38:31.904749] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:47.703 [2024-07-14 09:38:31.904756] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:47.703 [2024-07-14 09:38:31.904810] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.703 [2024-07-14 09:38:31.904821] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.703 [2024-07-14 09:38:31.904827] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.703 [2024-07-14 09:38:31.904834] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x951840) on tqpair=0x8faae0 00:28:47.703 [2024-07-14 09:38:31.904864] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:28:47.703 [2024-07-14 09:38:31.904899] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:28:47.703 [2024-07-14 09:38:31.904913] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.703 [2024-07-14 09:38:31.904920] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8faae0) 00:28:47.703 [2024-07-14 09:38:31.904931] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.703 [2024-07-14 09:38:31.904952] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x951840, cid 4, qid 0 00:28:47.703 [2024-07-14 09:38:31.905128] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:47.703 [2024-07-14 09:38:31.905140] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:47.703 [2024-07-14 09:38:31.905147] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:47.703 [2024-07-14 09:38:31.905153] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8faae0): datao=0, datal=4096, cccid=4 00:28:47.703 [2024-07-14 09:38:31.905161] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x951840) on tqpair(0x8faae0): expected_datao=0, payload_size=4096 00:28:47.703 [2024-07-14 09:38:31.905168] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.703 [2024-07-14 09:38:31.905178] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:47.703 [2024-07-14 09:38:31.905185] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:47.703 [2024-07-14 09:38:31.905259] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.703 [2024-07-14 09:38:31.905270] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.703 [2024-07-14 09:38:31.905277] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.703 [2024-07-14 09:38:31.905284] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x951840) on tqpair=0x8faae0 00:28:47.703 [2024-07-14 09:38:31.905305] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:28:47.703 [2024-07-14 09:38:31.905320] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:28:47.703 [2024-07-14 09:38:31.905334] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:28:47.703 [2024-07-14 09:38:31.905344] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:28:47.703 [2024-07-14 09:38:31.905353] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:28:47.703 [2024-07-14 09:38:31.905361] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:28:47.703 [2024-07-14 09:38:31.905370] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:28:47.703 [2024-07-14 09:38:31.905383] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:28:47.703 [2024-07-14 09:38:31.905393] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:28:47.703 [2024-07-14 09:38:31.905427] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.703 [2024-07-14 09:38:31.905436] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8faae0) 00:28:47.703 
[2024-07-14 09:38:31.905446] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.703 [2024-07-14 09:38:31.905457] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.703 [2024-07-14 09:38:31.905464] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.703 [2024-07-14 09:38:31.905470] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8faae0) 00:28:47.703 [2024-07-14 09:38:31.905479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:47.703 [2024-07-14 09:38:31.905503] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x951840, cid 4, qid 0 00:28:47.703 [2024-07-14 09:38:31.905531] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9519c0, cid 5, qid 0 00:28:47.703 [2024-07-14 09:38:31.905719] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.703 [2024-07-14 09:38:31.905731] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.703 [2024-07-14 09:38:31.905738] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.703 [2024-07-14 09:38:31.905745] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x951840) on tqpair=0x8faae0 00:28:47.703 [2024-07-14 09:38:31.905755] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.703 [2024-07-14 09:38:31.905764] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.703 [2024-07-14 09:38:31.905770] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.703 [2024-07-14 09:38:31.905777] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9519c0) on tqpair=0x8faae0 00:28:47.703 [2024-07-14 09:38:31.905792] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.703 [2024-07-14 09:38:31.905801] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8faae0) 00:28:47.703 [2024-07-14 09:38:31.905812] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.703 [2024-07-14 09:38:31.905847] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9519c0, cid 5, qid 0 00:28:47.703 [2024-07-14 09:38:31.906085] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.703 [2024-07-14 09:38:31.906101] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.703 [2024-07-14 09:38:31.906108] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.703 [2024-07-14 09:38:31.906115] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9519c0) on tqpair=0x8faae0 00:28:47.703 [2024-07-14 09:38:31.906131] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.703 [2024-07-14 09:38:31.906141] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8faae0) 00:28:47.703 [2024-07-14 09:38:31.906152] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.703 [2024-07-14 09:38:31.906173] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9519c0, cid 5, qid 0 00:28:47.703 [2024-07-14 09:38:31.906320] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:28:47.703 [2024-07-14 09:38:31.906333] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.703 [2024-07-14 09:38:31.906339] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.703 [2024-07-14 09:38:31.906346] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9519c0) on tqpair=0x8faae0 00:28:47.703 [2024-07-14 09:38:31.906361] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.703 [2024-07-14 09:38:31.906374] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8faae0) 00:28:47.703 [2024-07-14 09:38:31.906385] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.703 [2024-07-14 09:38:31.906406] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9519c0, cid 5, qid 0 00:28:47.703 [2024-07-14 09:38:31.906556] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.703 [2024-07-14 09:38:31.906569] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.703 [2024-07-14 09:38:31.906576] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.703 [2024-07-14 09:38:31.906582] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9519c0) on tqpair=0x8faae0 00:28:47.703 [2024-07-14 09:38:31.906606] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.703 [2024-07-14 09:38:31.906617] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8faae0) 00:28:47.703 [2024-07-14 09:38:31.906629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.703 [2024-07-14 09:38:31.906641] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.703 [2024-07-14 09:38:31.906648] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8faae0) 00:28:47.703 [2024-07-14 09:38:31.906658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.703 [2024-07-14 09:38:31.906670] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.703 [2024-07-14 09:38:31.906678] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x8faae0) 00:28:47.703 [2024-07-14 09:38:31.906702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.703 [2024-07-14 09:38:31.906714] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.703 [2024-07-14 09:38:31.906721] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x8faae0) 00:28:47.703 [2024-07-14 09:38:31.906730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.704 [2024-07-14 09:38:31.906751] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9519c0, cid 5, qid 0 00:28:47.704 [2024-07-14 09:38:31.906777] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x951840, cid 4, qid 0 00:28:47.704 [2024-07-14 09:38:31.906785] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x951b40, cid 6, qid 0 00:28:47.704 [2024-07-14 09:38:31.906793] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x951cc0, cid 7, qid 0 00:28:47.704 [2024-07-14 09:38:31.907071] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:47.704 [2024-07-14 09:38:31.907085] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:47.704 [2024-07-14 09:38:31.907092] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:47.704 [2024-07-14 09:38:31.907098] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8faae0): datao=0, datal=8192, cccid=5 00:28:47.704 [2024-07-14 09:38:31.907106] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9519c0) on tqpair(0x8faae0): expected_datao=0, payload_size=8192 00:28:47.704 [2024-07-14 09:38:31.907113] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.704 [2024-07-14 09:38:31.907254] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:47.704 [2024-07-14 09:38:31.907265] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:47.704 [2024-07-14 09:38:31.907273] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:47.704 [2024-07-14 09:38:31.907282] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:47.704 [2024-07-14 09:38:31.907292] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:47.704 [2024-07-14 09:38:31.907299] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8faae0): datao=0, datal=512, cccid=4 00:28:47.704 [2024-07-14 09:38:31.907306] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x951840) on tqpair(0x8faae0): expected_datao=0, payload_size=512 00:28:47.704 [2024-07-14 09:38:31.907314] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.704 [2024-07-14 09:38:31.907323] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:47.704 [2024-07-14 09:38:31.907330] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:47.704 [2024-07-14 09:38:31.907338] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:47.704 [2024-07-14 09:38:31.907347] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:47.704 [2024-07-14 09:38:31.907353] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:47.704 [2024-07-14 09:38:31.907359] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8faae0): datao=0, datal=512, cccid=6 00:28:47.704 [2024-07-14 09:38:31.907367] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x951b40) on tqpair(0x8faae0): expected_datao=0, payload_size=512 00:28:47.704 [2024-07-14 09:38:31.907374] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.704 [2024-07-14 09:38:31.907383] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:47.704 [2024-07-14 09:38:31.907390] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:47.704 [2024-07-14 09:38:31.907398] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:47.704 [2024-07-14 09:38:31.907407] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:47.704 [2024-07-14 09:38:31.907413] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:47.704 [2024-07-14 09:38:31.907419] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x8faae0): datao=0, datal=4096, cccid=7 00:28:47.704 [2024-07-14 09:38:31.907427] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x951cc0) on tqpair(0x8faae0): expected_datao=0, payload_size=4096 00:28:47.704 [2024-07-14 09:38:31.907434] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.704 [2024-07-14 09:38:31.907443] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:47.704 [2024-07-14 09:38:31.907450] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:47.704 [2024-07-14 09:38:31.907462] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.704 [2024-07-14 09:38:31.907471] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.704 [2024-07-14 09:38:31.907477] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.704 [2024-07-14 09:38:31.907484] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9519c0) on tqpair=0x8faae0 00:28:47.704 [2024-07-14 09:38:31.907502] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.704 [2024-07-14 09:38:31.907513] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.704 [2024-07-14 09:38:31.907519] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.704 [2024-07-14 09:38:31.907526] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x951840) on tqpair=0x8faae0 00:28:47.704 [2024-07-14 09:38:31.907556] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.704 [2024-07-14 09:38:31.907566] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.704 [2024-07-14 09:38:31.907572] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.704 [2024-07-14 09:38:31.907579] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x951b40) on tqpair=0x8faae0 00:28:47.704 [2024-07-14 09:38:31.907589] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.704 [2024-07-14 09:38:31.907598] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.704 [2024-07-14 09:38:31.907604] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.704 [2024-07-14 09:38:31.907610] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x951cc0) on tqpair=0x8faae0 00:28:47.704 ===================================================== 00:28:47.704 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:47.704 ===================================================== 00:28:47.704 Controller Capabilities/Features 00:28:47.704 ================================ 00:28:47.704 Vendor ID: 8086 00:28:47.704 Subsystem Vendor ID: 8086 00:28:47.704 Serial Number: SPDK00000000000001 00:28:47.704 Model Number: SPDK bdev Controller 00:28:47.704 Firmware Version: 24.09 00:28:47.704 Recommended Arb Burst: 6 00:28:47.704 IEEE OUI Identifier: e4 d2 5c 00:28:47.704 Multi-path I/O 00:28:47.704 May have multiple subsystem ports: Yes 00:28:47.704 May have multiple controllers: Yes 00:28:47.704 Associated with SR-IOV VF: No 00:28:47.704 Max Data Transfer Size: 131072 00:28:47.704 Max Number of Namespaces: 32 00:28:47.704 Max Number of I/O Queues: 127 00:28:47.704 NVMe Specification Version (VS): 1.3 00:28:47.704 NVMe Specification Version (Identify): 1.3 00:28:47.704 Maximum Queue Entries: 128 00:28:47.704 Contiguous Queues Required: Yes 00:28:47.704 Arbitration Mechanisms Supported 00:28:47.704 Weighted Round Robin: Not Supported 
00:28:47.704 Vendor Specific: Not Supported 00:28:47.704 Reset Timeout: 15000 ms 00:28:47.704 Doorbell Stride: 4 bytes 00:28:47.704 NVM Subsystem Reset: Not Supported 00:28:47.704 Command Sets Supported 00:28:47.704 NVM Command Set: Supported 00:28:47.704 Boot Partition: Not Supported 00:28:47.704 Memory Page Size Minimum: 4096 bytes 00:28:47.704 Memory Page Size Maximum: 4096 bytes 00:28:47.704 Persistent Memory Region: Not Supported 00:28:47.704 Optional Asynchronous Events Supported 00:28:47.704 Namespace Attribute Notices: Supported 00:28:47.704 Firmware Activation Notices: Not Supported 00:28:47.704 ANA Change Notices: Not Supported 00:28:47.704 PLE Aggregate Log Change Notices: Not Supported 00:28:47.704 LBA Status Info Alert Notices: Not Supported 00:28:47.704 EGE Aggregate Log Change Notices: Not Supported 00:28:47.704 Normal NVM Subsystem Shutdown event: Not Supported 00:28:47.704 Zone Descriptor Change Notices: Not Supported 00:28:47.704 Discovery Log Change Notices: Not Supported 00:28:47.704 Controller Attributes 00:28:47.704 128-bit Host Identifier: Supported 00:28:47.704 Non-Operational Permissive Mode: Not Supported 00:28:47.704 NVM Sets: Not Supported 00:28:47.704 Read Recovery Levels: Not Supported 00:28:47.704 Endurance Groups: Not Supported 00:28:47.704 Predictable Latency Mode: Not Supported 00:28:47.704 Traffic Based Keep ALive: Not Supported 00:28:47.704 Namespace Granularity: Not Supported 00:28:47.704 SQ Associations: Not Supported 00:28:47.704 UUID List: Not Supported 00:28:47.704 Multi-Domain Subsystem: Not Supported 00:28:47.704 Fixed Capacity Management: Not Supported 00:28:47.704 Variable Capacity Management: Not Supported 00:28:47.704 Delete Endurance Group: Not Supported 00:28:47.704 Delete NVM Set: Not Supported 00:28:47.704 Extended LBA Formats Supported: Not Supported 00:28:47.704 Flexible Data Placement Supported: Not Supported 00:28:47.704 00:28:47.704 Controller Memory Buffer Support 00:28:47.704 ================================ 00:28:47.704 Supported: No 00:28:47.704 00:28:47.704 Persistent Memory Region Support 00:28:47.704 ================================ 00:28:47.704 Supported: No 00:28:47.704 00:28:47.704 Admin Command Set Attributes 00:28:47.704 ============================ 00:28:47.704 Security Send/Receive: Not Supported 00:28:47.704 Format NVM: Not Supported 00:28:47.704 Firmware Activate/Download: Not Supported 00:28:47.704 Namespace Management: Not Supported 00:28:47.704 Device Self-Test: Not Supported 00:28:47.704 Directives: Not Supported 00:28:47.704 NVMe-MI: Not Supported 00:28:47.704 Virtualization Management: Not Supported 00:28:47.704 Doorbell Buffer Config: Not Supported 00:28:47.704 Get LBA Status Capability: Not Supported 00:28:47.704 Command & Feature Lockdown Capability: Not Supported 00:28:47.704 Abort Command Limit: 4 00:28:47.704 Async Event Request Limit: 4 00:28:47.704 Number of Firmware Slots: N/A 00:28:47.704 Firmware Slot 1 Read-Only: N/A 00:28:47.704 Firmware Activation Without Reset: N/A 00:28:47.704 Multiple Update Detection Support: N/A 00:28:47.704 Firmware Update Granularity: No Information Provided 00:28:47.704 Per-Namespace SMART Log: No 00:28:47.704 Asymmetric Namespace Access Log Page: Not Supported 00:28:47.704 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:28:47.704 Command Effects Log Page: Supported 00:28:47.704 Get Log Page Extended Data: Supported 00:28:47.704 Telemetry Log Pages: Not Supported 00:28:47.704 Persistent Event Log Pages: Not Supported 00:28:47.704 Supported Log Pages Log Page: May Support 
00:28:47.704 Commands Supported & Effects Log Page: Not Supported 00:28:47.704 Feature Identifiers & Effects Log Page:May Support 00:28:47.704 NVMe-MI Commands & Effects Log Page: May Support 00:28:47.704 Data Area 4 for Telemetry Log: Not Supported 00:28:47.704 Error Log Page Entries Supported: 128 00:28:47.704 Keep Alive: Supported 00:28:47.704 Keep Alive Granularity: 10000 ms 00:28:47.704 00:28:47.704 NVM Command Set Attributes 00:28:47.704 ========================== 00:28:47.704 Submission Queue Entry Size 00:28:47.704 Max: 64 00:28:47.704 Min: 64 00:28:47.704 Completion Queue Entry Size 00:28:47.705 Max: 16 00:28:47.705 Min: 16 00:28:47.705 Number of Namespaces: 32 00:28:47.705 Compare Command: Supported 00:28:47.705 Write Uncorrectable Command: Not Supported 00:28:47.705 Dataset Management Command: Supported 00:28:47.705 Write Zeroes Command: Supported 00:28:47.705 Set Features Save Field: Not Supported 00:28:47.705 Reservations: Supported 00:28:47.705 Timestamp: Not Supported 00:28:47.705 Copy: Supported 00:28:47.705 Volatile Write Cache: Present 00:28:47.705 Atomic Write Unit (Normal): 1 00:28:47.705 Atomic Write Unit (PFail): 1 00:28:47.705 Atomic Compare & Write Unit: 1 00:28:47.705 Fused Compare & Write: Supported 00:28:47.705 Scatter-Gather List 00:28:47.705 SGL Command Set: Supported 00:28:47.705 SGL Keyed: Supported 00:28:47.705 SGL Bit Bucket Descriptor: Not Supported 00:28:47.705 SGL Metadata Pointer: Not Supported 00:28:47.705 Oversized SGL: Not Supported 00:28:47.705 SGL Metadata Address: Not Supported 00:28:47.705 SGL Offset: Supported 00:28:47.705 Transport SGL Data Block: Not Supported 00:28:47.705 Replay Protected Memory Block: Not Supported 00:28:47.705 00:28:47.705 Firmware Slot Information 00:28:47.705 ========================= 00:28:47.705 Active slot: 1 00:28:47.705 Slot 1 Firmware Revision: 24.09 00:28:47.705 00:28:47.705 00:28:47.705 Commands Supported and Effects 00:28:47.705 ============================== 00:28:47.705 Admin Commands 00:28:47.705 -------------- 00:28:47.705 Get Log Page (02h): Supported 00:28:47.705 Identify (06h): Supported 00:28:47.705 Abort (08h): Supported 00:28:47.705 Set Features (09h): Supported 00:28:47.705 Get Features (0Ah): Supported 00:28:47.705 Asynchronous Event Request (0Ch): Supported 00:28:47.705 Keep Alive (18h): Supported 00:28:47.705 I/O Commands 00:28:47.705 ------------ 00:28:47.705 Flush (00h): Supported LBA-Change 00:28:47.705 Write (01h): Supported LBA-Change 00:28:47.705 Read (02h): Supported 00:28:47.705 Compare (05h): Supported 00:28:47.705 Write Zeroes (08h): Supported LBA-Change 00:28:47.705 Dataset Management (09h): Supported LBA-Change 00:28:47.705 Copy (19h): Supported LBA-Change 00:28:47.705 00:28:47.705 Error Log 00:28:47.705 ========= 00:28:47.705 00:28:47.705 Arbitration 00:28:47.705 =========== 00:28:47.705 Arbitration Burst: 1 00:28:47.705 00:28:47.705 Power Management 00:28:47.705 ================ 00:28:47.705 Number of Power States: 1 00:28:47.705 Current Power State: Power State #0 00:28:47.705 Power State #0: 00:28:47.705 Max Power: 0.00 W 00:28:47.705 Non-Operational State: Operational 00:28:47.705 Entry Latency: Not Reported 00:28:47.705 Exit Latency: Not Reported 00:28:47.705 Relative Read Throughput: 0 00:28:47.705 Relative Read Latency: 0 00:28:47.705 Relative Write Throughput: 0 00:28:47.705 Relative Write Latency: 0 00:28:47.705 Idle Power: Not Reported 00:28:47.705 Active Power: Not Reported 00:28:47.705 Non-Operational Permissive Mode: Not Supported 00:28:47.705 00:28:47.705 Health 
Information 00:28:47.705 ================== 00:28:47.705 Critical Warnings: 00:28:47.705 Available Spare Space: OK 00:28:47.705 Temperature: OK 00:28:47.705 Device Reliability: OK 00:28:47.705 Read Only: No 00:28:47.705 Volatile Memory Backup: OK 00:28:47.705 Current Temperature: 0 Kelvin (-273 Celsius) 00:28:47.705 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:28:47.705 Available Spare: 0% 00:28:47.705 Available Spare Threshold: 0% 00:28:47.705 Life Percentage Used:[2024-07-14 09:38:31.907721] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.705 [2024-07-14 09:38:31.907735] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x8faae0) 00:28:47.705 [2024-07-14 09:38:31.907746] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.705 [2024-07-14 09:38:31.907768] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x951cc0, cid 7, qid 0 00:28:47.705 [2024-07-14 09:38:31.911882] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.705 [2024-07-14 09:38:31.911899] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.705 [2024-07-14 09:38:31.911906] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.705 [2024-07-14 09:38:31.911913] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x951cc0) on tqpair=0x8faae0 00:28:47.705 [2024-07-14 09:38:31.911959] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:28:47.705 [2024-07-14 09:38:31.911979] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x951240) on tqpair=0x8faae0 00:28:47.705 [2024-07-14 09:38:31.911989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.705 [2024-07-14 09:38:31.911998] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9513c0) on tqpair=0x8faae0 00:28:47.705 [2024-07-14 09:38:31.912006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.705 [2024-07-14 09:38:31.912014] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x951540) on tqpair=0x8faae0 00:28:47.705 [2024-07-14 09:38:31.912022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.705 [2024-07-14 09:38:31.912030] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9516c0) on tqpair=0x8faae0 00:28:47.705 [2024-07-14 09:38:31.912038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.705 [2024-07-14 09:38:31.912050] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.705 [2024-07-14 09:38:31.912058] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.705 [2024-07-14 09:38:31.912065] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8faae0) 00:28:47.705 [2024-07-14 09:38:31.912075] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.705 [2024-07-14 09:38:31.912098] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9516c0, cid 3, qid 0 00:28:47.705 [2024-07-14 
09:38:31.912273] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.705 [2024-07-14 09:38:31.912286] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.705 [2024-07-14 09:38:31.912292] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.705 [2024-07-14 09:38:31.912299] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9516c0) on tqpair=0x8faae0 00:28:47.705 [2024-07-14 09:38:31.912311] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.705 [2024-07-14 09:38:31.912319] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.705 [2024-07-14 09:38:31.912325] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8faae0) 00:28:47.705 [2024-07-14 09:38:31.912336] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.705 [2024-07-14 09:38:31.912361] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9516c0, cid 3, qid 0 00:28:47.705 [2024-07-14 09:38:31.912529] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.705 [2024-07-14 09:38:31.912544] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.705 [2024-07-14 09:38:31.912551] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.705 [2024-07-14 09:38:31.912558] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9516c0) on tqpair=0x8faae0 00:28:47.705 [2024-07-14 09:38:31.912565] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:28:47.705 [2024-07-14 09:38:31.912577] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:28:47.705 [2024-07-14 09:38:31.912593] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.705 [2024-07-14 09:38:31.912603] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.705 [2024-07-14 09:38:31.912609] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8faae0) 00:28:47.705 [2024-07-14 09:38:31.912620] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.705 [2024-07-14 09:38:31.912640] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9516c0, cid 3, qid 0 00:28:47.705 [2024-07-14 09:38:31.912901] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.705 [2024-07-14 09:38:31.912915] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.705 [2024-07-14 09:38:31.912922] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.705 [2024-07-14 09:38:31.912929] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9516c0) on tqpair=0x8faae0 00:28:47.706 [2024-07-14 09:38:31.912945] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.706 [2024-07-14 09:38:31.912955] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.706 [2024-07-14 09:38:31.912961] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8faae0) 00:28:47.706 [2024-07-14 09:38:31.912972] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.706 [2024-07-14 09:38:31.912992] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9516c0, cid 3, qid 0 00:28:47.706 [2024-07-14 09:38:31.913143] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.706 [2024-07-14 09:38:31.913158] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.706 [2024-07-14 09:38:31.913165] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.706 [2024-07-14 09:38:31.913172] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9516c0) on tqpair=0x8faae0 00:28:47.706 [2024-07-14 09:38:31.913188] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.706 [2024-07-14 09:38:31.913197] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.706 [2024-07-14 09:38:31.913204] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8faae0) 00:28:47.706 [2024-07-14 09:38:31.913214] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.706 [2024-07-14 09:38:31.913240] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9516c0, cid 3, qid 0 00:28:47.706 [2024-07-14 09:38:31.916891] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.706 [2024-07-14 09:38:31.916908] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.706 [2024-07-14 09:38:31.916915] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.706 [2024-07-14 09:38:31.916922] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9516c0) on tqpair=0x8faae0 00:28:47.706 [2024-07-14 09:38:31.916954] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.706 [2024-07-14 09:38:31.916964] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.706 [2024-07-14 09:38:31.916971] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8faae0) 00:28:47.706 [2024-07-14 09:38:31.916982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.706 [2024-07-14 09:38:31.917004] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9516c0, cid 3, qid 0 00:28:47.706 [2024-07-14 09:38:31.917190] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.706 [2024-07-14 09:38:31.917205] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.706 [2024-07-14 09:38:31.917212] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.706 [2024-07-14 09:38:31.917223] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9516c0) on tqpair=0x8faae0 00:28:47.706 [2024-07-14 09:38:31.917237] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:28:47.706 0% 00:28:47.706 Data Units Read: 0 00:28:47.706 Data Units Written: 0 00:28:47.706 Host Read Commands: 0 00:28:47.706 Host Write Commands: 0 00:28:47.706 Controller Busy Time: 0 minutes 00:28:47.706 Power Cycles: 0 00:28:47.706 Power On Hours: 0 hours 00:28:47.706 Unsafe Shutdowns: 0 00:28:47.706 Unrecoverable Media Errors: 0 00:28:47.706 Lifetime Error Log Entries: 0 00:28:47.706 Warning Temperature Time: 0 minutes 00:28:47.706 Critical Temperature Time: 0 minutes 00:28:47.706 00:28:47.706 Number of Queues 00:28:47.706 ================ 00:28:47.706 Number of I/O Submission Queues: 127 00:28:47.706 Number of 
I/O Completion Queues: 127 00:28:47.706 00:28:47.706 Active Namespaces 00:28:47.706 ================= 00:28:47.706 Namespace ID:1 00:28:47.706 Error Recovery Timeout: Unlimited 00:28:47.706 Command Set Identifier: NVM (00h) 00:28:47.706 Deallocate: Supported 00:28:47.706 Deallocated/Unwritten Error: Not Supported 00:28:47.706 Deallocated Read Value: Unknown 00:28:47.706 Deallocate in Write Zeroes: Not Supported 00:28:47.706 Deallocated Guard Field: 0xFFFF 00:28:47.706 Flush: Supported 00:28:47.706 Reservation: Supported 00:28:47.706 Namespace Sharing Capabilities: Multiple Controllers 00:28:47.706 Size (in LBAs): 131072 (0GiB) 00:28:47.706 Capacity (in LBAs): 131072 (0GiB) 00:28:47.706 Utilization (in LBAs): 131072 (0GiB) 00:28:47.706 NGUID: ABCDEF0123456789ABCDEF0123456789 00:28:47.706 EUI64: ABCDEF0123456789 00:28:47.706 UUID: 6c9237a2-7ffc-4fe8-a776-83fc1be08c13 00:28:47.706 Thin Provisioning: Not Supported 00:28:47.706 Per-NS Atomic Units: Yes 00:28:47.706 Atomic Boundary Size (Normal): 0 00:28:47.706 Atomic Boundary Size (PFail): 0 00:28:47.706 Atomic Boundary Offset: 0 00:28:47.706 Maximum Single Source Range Length: 65535 00:28:47.706 Maximum Copy Length: 65535 00:28:47.706 Maximum Source Range Count: 1 00:28:47.706 NGUID/EUI64 Never Reused: No 00:28:47.706 Namespace Write Protected: No 00:28:47.706 Number of LBA Formats: 1 00:28:47.706 Current LBA Format: LBA Format #00 00:28:47.706 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:47.706 00:28:47.706 09:38:31 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:28:47.706 09:38:31 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:47.706 09:38:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.706 09:38:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:47.706 09:38:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.706 09:38:31 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:28:47.706 09:38:31 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:28:47.706 09:38:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:47.706 09:38:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:28:47.706 09:38:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:47.706 09:38:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:28:47.706 09:38:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:47.706 09:38:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:47.706 rmmod nvme_tcp 00:28:47.706 rmmod nvme_fabrics 00:28:47.706 rmmod nvme_keyring 00:28:47.706 09:38:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:47.706 09:38:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:28:47.706 09:38:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:28:47.706 09:38:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 835487 ']' 00:28:47.706 09:38:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 835487 00:28:47.706 09:38:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 835487 ']' 00:28:47.706 09:38:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 835487 00:28:47.706 09:38:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:28:47.706 09:38:31 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:47.706 09:38:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 835487 00:28:47.706 09:38:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:47.706 09:38:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:47.706 09:38:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 835487' 00:28:47.706 killing process with pid 835487 00:28:47.706 09:38:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 835487 00:28:47.706 09:38:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 835487 00:28:47.965 09:38:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:47.965 09:38:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:47.965 09:38:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:47.965 09:38:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:47.965 09:38:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:47.965 09:38:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:47.965 09:38:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:47.965 09:38:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:49.867 09:38:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:49.867 00:28:49.867 real 0m5.344s 00:28:49.867 user 0m4.223s 00:28:49.867 sys 0m1.841s 00:28:49.867 09:38:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:49.867 09:38:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:49.867 ************************************ 00:28:49.867 END TEST nvmf_identify 00:28:49.867 ************************************ 00:28:50.126 09:38:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:50.126 09:38:34 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:50.126 09:38:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:50.126 09:38:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:50.126 09:38:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:50.126 ************************************ 00:28:50.126 START TEST nvmf_perf 00:28:50.126 ************************************ 00:28:50.126 09:38:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:50.126 * Looking for test storage... 
00:28:50.126 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:50.126 09:38:34 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:50.126 09:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:28:50.126 09:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:50.126 09:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:50.126 09:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:50.126 09:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:50.126 09:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:50.126 09:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:50.126 09:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:50.126 09:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:50.126 09:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:50.126 09:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:50.126 09:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:50.126 09:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:50.126 09:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:50.126 09:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:50.126 09:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:50.126 09:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:50.126 09:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:50.126 09:38:34 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:50.126 09:38:34 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:50.126 09:38:34 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:50.126 09:38:34 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.126 09:38:34 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.126 09:38:34 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.126 09:38:34 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:28:50.126 09:38:34 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.126 09:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:28:50.126 09:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:50.126 09:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:50.126 09:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:50.126 09:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:50.126 09:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:50.126 09:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:50.126 09:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:50.126 09:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:50.126 09:38:34 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:50.126 09:38:34 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:50.126 09:38:34 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:50.126 09:38:34 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:28:50.126 09:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:50.126 09:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:50.126 09:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:50.126 09:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:50.126 09:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:50.126 09:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:50.126 09:38:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:50.126 09:38:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:50.126 09:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:50.126 09:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:50.126 09:38:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:28:50.126 09:38:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:52.028 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:52.028 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:52.028 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:52.028 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:52.028 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:52.028 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:28:52.028 00:28:52.028 --- 10.0.0.2 ping statistics --- 00:28:52.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:52.028 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:52.028 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:52.028 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:28:52.028 00:28:52.028 --- 10.0.0.1 ping statistics --- 00:28:52.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:52.028 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:52.028 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:52.287 09:38:36 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:28:52.287 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:52.287 09:38:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:52.287 09:38:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:52.287 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=837560 00:28:52.287 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:52.287 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 837560 00:28:52.287 09:38:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 837560 ']' 00:28:52.287 09:38:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:52.287 09:38:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:52.287 09:38:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:52.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:52.287 09:38:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:52.287 09:38:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:52.287 [2024-07-14 09:38:36.537085] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:28:52.287 [2024-07-14 09:38:36.537176] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:52.287 EAL: No free 2048 kB hugepages reported on node 1 00:28:52.287 [2024-07-14 09:38:36.607587] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:52.287 [2024-07-14 09:38:36.701386] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:52.287 [2024-07-14 09:38:36.701455] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
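Editor's note: the network plumbing and target launch traced above reduce to a short sequence. A minimal sketch using the interface names, addresses, and binaries that appear in this trace (paths relative to the SPDK tree; the waitforlisten step is approximated here with a timed rpc_get_methods call, which is an assumption, not the helper's exact implementation):

# Move one port of the NIC into a private namespace and address both ends.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                               # initiator side (host)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 # target side (namespace)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT      # allow NVMe/TCP into the host side
ping -c 1 10.0.0.2                                                # verify both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
modprobe nvme-tcp                                                 # kernel initiator support

# Launch the SPDK target inside the namespace, then wait for its RPC socket.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
./scripts/rpc.py -t 30 rpc_get_methods > /dev/null                # crude stand-in for waitforlisten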
00:28:52.287 [2024-07-14 09:38:36.701472] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:52.287 [2024-07-14 09:38:36.701486] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:52.287 [2024-07-14 09:38:36.701505] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:52.287 [2024-07-14 09:38:36.704894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:52.287 [2024-07-14 09:38:36.704958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:52.287 [2024-07-14 09:38:36.705013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:52.287 [2024-07-14 09:38:36.705017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:52.545 09:38:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:52.545 09:38:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:28:52.545 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:52.545 09:38:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:52.545 09:38:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:52.545 09:38:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:52.545 09:38:36 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:52.545 09:38:36 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:28:55.821 09:38:39 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:28:55.821 09:38:39 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:28:55.821 09:38:40 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:28:55.821 09:38:40 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:56.077 09:38:40 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:28:56.077 09:38:40 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:28:56.077 09:38:40 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:28:56.077 09:38:40 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:28:56.077 09:38:40 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:28:56.334 [2024-07-14 09:38:40.684919] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:56.334 09:38:40 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:56.591 09:38:40 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:56.591 09:38:40 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:56.849 09:38:41 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:56.849 09:38:41 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:57.106 09:38:41 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:57.363 [2024-07-14 09:38:41.676515] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:57.363 09:38:41 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:57.621 09:38:41 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:28:57.621 09:38:41 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:28:57.621 09:38:41 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:28:57.621 09:38:41 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:28:58.992 Initializing NVMe Controllers 00:28:58.992 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:28:58.992 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:28:58.992 Initialization complete. Launching workers. 00:28:58.992 ======================================================== 00:28:58.992 Latency(us) 00:28:58.992 Device Information : IOPS MiB/s Average min max 00:28:58.992 PCIE (0000:88:00.0) NSID 1 from core 0: 85691.37 334.73 372.92 33.87 6272.48 00:28:58.992 ======================================================== 00:28:58.992 Total : 85691.37 334.73 372.92 33.87 6272.48 00:28:58.992 00:28:58.992 09:38:43 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:58.992 EAL: No free 2048 kB hugepages reported on node 1 00:29:00.362 Initializing NVMe Controllers 00:29:00.362 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:00.362 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:00.362 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:00.362 Initialization complete. Launching workers. 
00:29:00.362 ======================================================== 00:29:00.362 Latency(us) 00:29:00.362 Device Information : IOPS MiB/s Average min max 00:29:00.362 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 136.00 0.53 7466.61 308.48 45754.83 00:29:00.362 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 70.00 0.27 14848.46 7221.37 47924.97 00:29:00.362 ======================================================== 00:29:00.362 Total : 206.00 0.80 9975.01 308.48 47924.97 00:29:00.362 00:29:00.362 09:38:44 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:00.362 EAL: No free 2048 kB hugepages reported on node 1 00:29:01.736 Initializing NVMe Controllers 00:29:01.736 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:01.736 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:01.736 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:01.736 Initialization complete. Launching workers. 00:29:01.736 ======================================================== 00:29:01.736 Latency(us) 00:29:01.736 Device Information : IOPS MiB/s Average min max 00:29:01.736 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8237.46 32.18 3883.63 495.80 10742.40 00:29:01.736 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3725.33 14.55 8590.62 6890.34 20197.36 00:29:01.736 ======================================================== 00:29:01.736 Total : 11962.79 46.73 5349.43 495.80 20197.36 00:29:01.736 00:29:01.736 09:38:45 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:29:01.736 09:38:45 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:29:01.736 09:38:45 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:01.736 EAL: No free 2048 kB hugepages reported on node 1 00:29:04.291 Initializing NVMe Controllers 00:29:04.291 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:04.291 Controller IO queue size 128, less than required. 00:29:04.291 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:04.291 Controller IO queue size 128, less than required. 00:29:04.291 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:04.291 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:04.291 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:04.291 Initialization complete. Launching workers. 
00:29:04.291 ======================================================== 00:29:04.291 Latency(us) 00:29:04.291 Device Information : IOPS MiB/s Average min max 00:29:04.291 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 727.71 181.93 182404.67 100847.33 234005.02 00:29:04.291 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 568.38 142.09 232393.79 94954.85 374803.09 00:29:04.291 ======================================================== 00:29:04.291 Total : 1296.08 324.02 204326.68 94954.85 374803.09 00:29:04.291 00:29:04.291 09:38:48 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:29:04.291 EAL: No free 2048 kB hugepages reported on node 1 00:29:04.549 No valid NVMe controllers or AIO or URING devices found 00:29:04.549 Initializing NVMe Controllers 00:29:04.549 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:04.549 Controller IO queue size 128, less than required. 00:29:04.549 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:04.549 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:29:04.549 Controller IO queue size 128, less than required. 00:29:04.549 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:04.549 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:29:04.549 WARNING: Some requested NVMe devices were skipped 00:29:04.549 09:38:48 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:29:04.549 EAL: No free 2048 kB hugepages reported on node 1 00:29:07.079 Initializing NVMe Controllers 00:29:07.079 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:07.079 Controller IO queue size 128, less than required. 00:29:07.079 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:07.079 Controller IO queue size 128, less than required. 00:29:07.079 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:07.079 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:07.079 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:07.079 Initialization complete. Launching workers. 
00:29:07.079 00:29:07.079 ==================== 00:29:07.079 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:29:07.079 TCP transport: 00:29:07.079 polls: 36114 00:29:07.079 idle_polls: 11676 00:29:07.079 sock_completions: 24438 00:29:07.079 nvme_completions: 3455 00:29:07.079 submitted_requests: 5170 00:29:07.079 queued_requests: 1 00:29:07.079 00:29:07.079 ==================== 00:29:07.079 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:29:07.079 TCP transport: 00:29:07.079 polls: 35058 00:29:07.079 idle_polls: 11551 00:29:07.079 sock_completions: 23507 00:29:07.079 nvme_completions: 3457 00:29:07.079 submitted_requests: 5240 00:29:07.079 queued_requests: 1 00:29:07.079 ======================================================== 00:29:07.079 Latency(us) 00:29:07.079 Device Information : IOPS MiB/s Average min max 00:29:07.079 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 863.49 215.87 152406.94 89588.09 237315.24 00:29:07.079 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 863.99 216.00 151261.53 52721.41 215726.46 00:29:07.079 ======================================================== 00:29:07.079 Total : 1727.49 431.87 151834.07 52721.41 237315.24 00:29:07.079 00:29:07.337 09:38:51 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:29:07.337 09:38:51 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:07.337 09:38:51 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:29:07.337 09:38:51 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:29:07.337 09:38:51 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:29:10.613 09:38:54 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=3236935a-27b4-4ad9-a3dd-74fa8ca2b31d 00:29:10.613 09:38:54 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 3236935a-27b4-4ad9-a3dd-74fa8ca2b31d 00:29:10.613 09:38:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=3236935a-27b4-4ad9-a3dd-74fa8ca2b31d 00:29:10.613 09:38:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:29:10.613 09:38:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:29:10.613 09:38:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:29:10.613 09:38:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:10.870 09:38:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:29:10.870 { 00:29:10.870 "uuid": "3236935a-27b4-4ad9-a3dd-74fa8ca2b31d", 00:29:10.870 "name": "lvs_0", 00:29:10.870 "base_bdev": "Nvme0n1", 00:29:10.870 "total_data_clusters": 238234, 00:29:10.870 "free_clusters": 238234, 00:29:10.870 "block_size": 512, 00:29:10.870 "cluster_size": 4194304 00:29:10.870 } 00:29:10.870 ]' 00:29:10.870 09:38:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="3236935a-27b4-4ad9-a3dd-74fa8ca2b31d") .free_clusters' 00:29:10.870 09:38:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=238234 00:29:10.870 09:38:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="3236935a-27b4-4ad9-a3dd-74fa8ca2b31d") .cluster_size' 00:29:10.870 09:38:55 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:29:10.870 09:38:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=952936 00:29:10.870 09:38:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 952936 00:29:10.870 952936 00:29:10.870 09:38:55 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:29:10.870 09:38:55 nvmf_tcp.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:29:10.870 09:38:55 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3236935a-27b4-4ad9-a3dd-74fa8ca2b31d lbd_0 20480 00:29:11.802 09:38:55 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=e07d8252-7f2a-4ec7-8fbb-e489246e2bb7 00:29:11.802 09:38:55 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore e07d8252-7f2a-4ec7-8fbb-e489246e2bb7 lvs_n_0 00:29:12.367 09:38:56 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=5e368e72-b2a5-43bd-b50f-94e1cea2d91f 00:29:12.367 09:38:56 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 5e368e72-b2a5-43bd-b50f-94e1cea2d91f 00:29:12.367 09:38:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=5e368e72-b2a5-43bd-b50f-94e1cea2d91f 00:29:12.367 09:38:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:29:12.367 09:38:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:29:12.367 09:38:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:29:12.367 09:38:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:12.624 09:38:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:29:12.624 { 00:29:12.624 "uuid": "3236935a-27b4-4ad9-a3dd-74fa8ca2b31d", 00:29:12.624 "name": "lvs_0", 00:29:12.624 "base_bdev": "Nvme0n1", 00:29:12.624 "total_data_clusters": 238234, 00:29:12.624 "free_clusters": 233114, 00:29:12.624 "block_size": 512, 00:29:12.624 "cluster_size": 4194304 00:29:12.624 }, 00:29:12.624 { 00:29:12.624 "uuid": "5e368e72-b2a5-43bd-b50f-94e1cea2d91f", 00:29:12.624 "name": "lvs_n_0", 00:29:12.624 "base_bdev": "e07d8252-7f2a-4ec7-8fbb-e489246e2bb7", 00:29:12.624 "total_data_clusters": 5114, 00:29:12.624 "free_clusters": 5114, 00:29:12.624 "block_size": 512, 00:29:12.624 "cluster_size": 4194304 00:29:12.624 } 00:29:12.624 ]' 00:29:12.624 09:38:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="5e368e72-b2a5-43bd-b50f-94e1cea2d91f") .free_clusters' 00:29:12.624 09:38:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:29:12.624 09:38:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="5e368e72-b2a5-43bd-b50f-94e1cea2d91f") .cluster_size' 00:29:12.624 09:38:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:29:12.624 09:38:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:29:12.624 09:38:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 20456 00:29:12.624 20456 00:29:12.624 09:38:56 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:29:12.624 09:38:56 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5e368e72-b2a5-43bd-b50f-94e1cea2d91f lbd_nest_0 20456 00:29:12.882 09:38:57 nvmf_tcp.nvmf_perf -- 
host/perf.sh@88 -- # lb_nested_guid=79747843-039b-4955-bd69-f63793d33d4f 00:29:12.882 09:38:57 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:13.139 09:38:57 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:29:13.139 09:38:57 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 79747843-039b-4955-bd69-f63793d33d4f 00:29:13.396 09:38:57 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:13.653 09:38:57 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:29:13.653 09:38:57 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:29:13.653 09:38:57 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:13.653 09:38:57 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:13.653 09:38:57 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:13.653 EAL: No free 2048 kB hugepages reported on node 1 00:29:25.865 Initializing NVMe Controllers 00:29:25.865 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:25.865 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:25.865 Initialization complete. Launching workers. 00:29:25.865 ======================================================== 00:29:25.865 Latency(us) 00:29:25.865 Device Information : IOPS MiB/s Average min max 00:29:25.865 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 46.40 0.02 21631.80 250.62 46049.05 00:29:25.866 ======================================================== 00:29:25.866 Total : 46.40 0.02 21631.80 250.62 46049.05 00:29:25.866 00:29:25.866 09:39:08 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:25.866 09:39:08 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:25.866 EAL: No free 2048 kB hugepages reported on node 1 00:29:35.841 Initializing NVMe Controllers 00:29:35.841 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:35.841 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:35.841 Initialization complete. Launching workers. 
00:29:35.841 ======================================================== 00:29:35.841 Latency(us) 00:29:35.841 Device Information : IOPS MiB/s Average min max 00:29:35.841 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 79.09 9.89 12653.71 6178.90 47895.17 00:29:35.842 ======================================================== 00:29:35.842 Total : 79.09 9.89 12653.71 6178.90 47895.17 00:29:35.842 00:29:35.842 09:39:18 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:35.842 09:39:18 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:35.842 09:39:18 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:35.842 EAL: No free 2048 kB hugepages reported on node 1 00:29:45.801 Initializing NVMe Controllers 00:29:45.801 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:45.801 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:45.801 Initialization complete. Launching workers. 00:29:45.801 ======================================================== 00:29:45.801 Latency(us) 00:29:45.801 Device Information : IOPS MiB/s Average min max 00:29:45.801 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6637.50 3.24 4821.36 350.47 12093.37 00:29:45.801 ======================================================== 00:29:45.801 Total : 6637.50 3.24 4821.36 350.47 12093.37 00:29:45.801 00:29:45.801 09:39:28 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:45.802 09:39:28 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:45.802 EAL: No free 2048 kB hugepages reported on node 1 00:29:55.764 Initializing NVMe Controllers 00:29:55.764 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:55.764 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:55.764 Initialization complete. Launching workers. 00:29:55.764 ======================================================== 00:29:55.764 Latency(us) 00:29:55.764 Device Information : IOPS MiB/s Average min max 00:29:55.764 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1650.42 206.30 19391.70 1444.09 39796.01 00:29:55.764 ======================================================== 00:29:55.764 Total : 1650.42 206.30 19391.70 1444.09 39796.01 00:29:55.764 00:29:55.764 09:39:39 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:55.764 09:39:39 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:55.764 09:39:39 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:55.764 EAL: No free 2048 kB hugepages reported on node 1 00:30:05.727 Initializing NVMe Controllers 00:30:05.727 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:05.728 Controller IO queue size 128, less than required. 00:30:05.728 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
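Editor's note: each of the result tables in this stretch is one iteration of the queue-depth/IO-size sweep that perf.sh set up earlier (qd_depth 1/32/128, io_size 512/131072). A sketch of that loop, assuming the same listener address and run length shown in the trace:

perf=./build/bin/spdk_nvme_perf
trid='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

for qd in 1 32 128; do
    for o in 512 131072; do
        # 10 s of 50/50 random read/write at this queue depth and IO size,
        # against the lvol-backed namespace exported by cnode1.
        "$perf" -q "$qd" -o "$o" -w randrw -M 50 -t 10 -r "$trid"
    done
done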
00:30:05.728 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:05.728 Initialization complete. Launching workers. 00:30:05.728 ======================================================== 00:30:05.728 Latency(us) 00:30:05.728 Device Information : IOPS MiB/s Average min max 00:30:05.728 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11908.42 5.81 10751.53 1711.21 23517.38 00:30:05.728 ======================================================== 00:30:05.728 Total : 11908.42 5.81 10751.53 1711.21 23517.38 00:30:05.728 00:30:05.728 09:39:49 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:05.728 09:39:49 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:05.728 EAL: No free 2048 kB hugepages reported on node 1 00:30:17.919 Initializing NVMe Controllers 00:30:17.919 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:17.919 Controller IO queue size 128, less than required. 00:30:17.919 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:17.919 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:17.919 Initialization complete. Launching workers. 00:30:17.919 ======================================================== 00:30:17.919 Latency(us) 00:30:17.919 Device Information : IOPS MiB/s Average min max 00:30:17.919 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1213.29 151.66 105645.23 21917.60 214285.46 00:30:17.919 ======================================================== 00:30:17.919 Total : 1213.29 151.66 105645.23 21917.60 214285.46 00:30:17.919 00:30:17.919 09:40:00 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:17.919 09:40:00 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 79747843-039b-4955-bd69-f63793d33d4f 00:30:17.919 09:40:01 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:30:17.919 09:40:01 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e07d8252-7f2a-4ec7-8fbb-e489246e2bb7 00:30:17.919 09:40:01 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:17.919 09:40:02 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:30:17.919 09:40:02 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:30:17.919 09:40:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:17.919 09:40:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:30:17.919 09:40:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:17.919 09:40:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:30:17.919 09:40:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:17.919 09:40:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:17.919 rmmod nvme_tcp 00:30:17.919 rmmod nvme_fabrics 00:30:17.919 rmmod nvme_keyring 00:30:17.919 09:40:02 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:17.919 09:40:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:30:17.919 09:40:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:30:17.919 09:40:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 837560 ']' 00:30:17.919 09:40:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 837560 00:30:17.919 09:40:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 837560 ']' 00:30:17.919 09:40:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 837560 00:30:17.920 09:40:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:30:17.920 09:40:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:17.920 09:40:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 837560 00:30:17.920 09:40:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:17.920 09:40:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:17.920 09:40:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 837560' 00:30:17.920 killing process with pid 837560 00:30:17.920 09:40:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 837560 00:30:17.920 09:40:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 837560 00:30:19.291 09:40:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:19.291 09:40:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:19.291 09:40:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:19.291 09:40:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:19.291 09:40:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:19.291 09:40:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:19.291 09:40:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:19.291 09:40:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:21.822 09:40:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:21.822 00:30:21.822 real 1m31.421s 00:30:21.822 user 5m36.926s 00:30:21.822 sys 0m15.527s 00:30:21.822 09:40:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:21.822 09:40:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:21.822 ************************************ 00:30:21.822 END TEST nvmf_perf 00:30:21.822 ************************************ 00:30:21.822 09:40:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:21.822 09:40:05 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:21.822 09:40:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:21.822 09:40:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:21.822 09:40:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:21.822 ************************************ 00:30:21.822 START TEST nvmf_fio_host 00:30:21.822 ************************************ 00:30:21.822 09:40:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:21.822 * Looking for test storage... 
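Editor's note: the nvmf_perf teardown traced just above follows a fixed order, sketched here in condensed form (same RPC socket, lvol UUIDs, PID, and interface names as reported in the trace; the final namespace removal is an assumption about what _remove_spdk_ns does):

rpc_py=./scripts/rpc.py
nvmfpid=837560   # target PID reported earlier in this trace

# Remove the subsystem first so nothing still references the bdevs,
# then unwind the lvol stack from the top down.
"$rpc_py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
"$rpc_py" bdev_lvol_delete 79747843-039b-4955-bd69-f63793d33d4f   # nested lvol (lbd_nest_0)
"$rpc_py" bdev_lvol_delete_lvstore -l lvs_n_0
"$rpc_py" bdev_lvol_delete e07d8252-7f2a-4ec7-8fbb-e489246e2bb7   # base lvol (lbd_0)
"$rpc_py" bdev_lvol_delete_lvstore -l lvs_0

# Drop the kernel initiator modules, stop the target, flush the test addresses.
sync
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
kill "$nvmfpid"
ip -4 addr flush cvl_0_1
ip netns delete cvl_0_0_ns_spdk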
00:30:21.822 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:21.822 09:40:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:21.822 09:40:05 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:21.822 09:40:05 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:21.822 09:40:05 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:21.822 09:40:05 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.822 09:40:05 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.822 09:40:05 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.822 09:40:05 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:30:21.822 09:40:05 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.822 09:40:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:21.822 09:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:30:21.822 09:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:21.822 09:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:21.822 09:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:30:21.822 09:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:21.822 09:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:21.822 09:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:21.822 09:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:21.822 09:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:21.822 09:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:21.822 09:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:21.822 09:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:21.822 09:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:21.822 09:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:21.822 09:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:21.822 09:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:21.822 09:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:21.822 09:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:21.822 09:40:05 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:21.822 09:40:05 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:21.822 09:40:05 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:21.822 09:40:05 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.822 09:40:05 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.822 09:40:05 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.822 09:40:05 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:30:21.822 09:40:05 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.822 09:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:30:21.822 09:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:21.822 09:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:21.822 09:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:21.822 09:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:21.822 09:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:21.822 09:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:21.822 09:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:21.822 09:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:21.822 09:40:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:21.822 09:40:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:30:21.822 09:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:21.822 09:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:21.822 09:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:21.822 09:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:21.822 09:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:21.823 09:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:21.823 09:40:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:21.823 09:40:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:21.823 09:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:21.823 09:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:21.823 09:40:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:30:21.823 09:40:05 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:23.726 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:23.726 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:23.726 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:23.726 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
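The gather_supported_nvmf_pci_devs trace above classifies NICs by PCI vendor/device ID (0x8086:0x159b lands in the e810 list), checks the bound driver (ice), and then resolves each PCI function to its kernel interface name by globbing the per-device net/ directory in sysfs, which is where cvl_0_0 and cvl_0_1 come from. A minimal stand-alone version of that lookup, assuming only the standard sysfs layout and not taken from the test scripts themselves:

  # Cross-check vendor/device ID, bound driver, and interface name for the NICs found above
  for bdf in 0000:0a:00.0 0000:0a:00.1; do
      vendor=$(cat /sys/bus/pci/devices/$bdf/vendor)                      # expected 0x8086
      device=$(cat /sys/bus/pci/devices/$bdf/device)                      # expected 0x159b
      driver=$(basename "$(readlink /sys/bus/pci/devices/$bdf/driver)")   # expected ice
      for netdir in /sys/bus/pci/devices/$bdf/net/*; do
          echo "$bdf ($vendor:$device, $driver) -> $(basename "$netdir")"
      done
  done

The [[ up == up ]] comparisons in the trace suggest the harness additionally filters on the interface's reported state before appending it to net_devs.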
00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:23.726 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:23.726 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:30:23.726 00:30:23.726 --- 10.0.0.2 ping statistics --- 00:30:23.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:23.726 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:23.726 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:23.726 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:30:23.726 00:30:23.726 --- 10.0.0.1 ping statistics --- 00:30:23.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:23.726 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:23.726 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:23.727 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:23.727 09:40:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:23.727 09:40:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:30:23.727 09:40:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:30:23.727 09:40:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:23.727 09:40:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:23.727 09:40:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=850164 00:30:23.727 09:40:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:23.727 09:40:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:23.727 09:40:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 850164 00:30:23.727 09:40:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 850164 ']' 00:30:23.727 09:40:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:23.727 09:40:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:23.727 09:40:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:23.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:23.727 09:40:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:23.727 09:40:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:23.727 [2024-07-14 09:40:07.980879] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:30:23.727 [2024-07-14 09:40:07.980961] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:23.727 EAL: No free 2048 kB hugepages reported on node 1 00:30:23.727 [2024-07-14 09:40:08.059920] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:23.727 [2024-07-14 09:40:08.156174] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:30:23.727 [2024-07-14 09:40:08.156242] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:23.727 [2024-07-14 09:40:08.156258] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:23.727 [2024-07-14 09:40:08.156272] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:23.727 [2024-07-14 09:40:08.156283] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:23.727 [2024-07-14 09:40:08.156366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:23.727 [2024-07-14 09:40:08.156423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:23.727 [2024-07-14 09:40:08.157890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:23.727 [2024-07-14 09:40:08.157895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:23.985 09:40:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:23.985 09:40:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:30:23.985 09:40:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:24.243 [2024-07-14 09:40:08.565525] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:24.243 09:40:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:30:24.243 09:40:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:24.243 09:40:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:24.243 09:40:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:30:24.501 Malloc1 00:30:24.501 09:40:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:24.758 09:40:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:25.015 09:40:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:25.273 [2024-07-14 09:40:09.607688] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:25.273 09:40:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:25.530 09:40:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:25.530 09:40:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:25.530 09:40:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:30:25.530 09:40:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:25.530 09:40:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:25.530 09:40:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:25.530 09:40:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:25.530 09:40:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:30:25.530 09:40:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:25.530 09:40:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:25.530 09:40:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:25.530 09:40:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:30:25.530 09:40:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:25.530 09:40:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:25.530 09:40:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:25.530 09:40:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:25.530 09:40:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:25.530 09:40:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:25.530 09:40:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:25.530 09:40:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:25.530 09:40:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:25.530 09:40:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:25.530 09:40:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:25.788 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:25.788 fio-3.35 00:30:25.788 Starting 1 thread 00:30:25.788 EAL: No free 2048 kB hugepages reported on node 1 00:30:28.309 00:30:28.309 test: (groupid=0, jobs=1): err= 0: pid=850519: Sun Jul 14 09:40:12 2024 00:30:28.309 read: IOPS=9010, BW=35.2MiB/s (36.9MB/s)(70.6MiB/2007msec) 00:30:28.309 slat (usec): min=2, max=171, avg= 2.83, stdev= 1.96 00:30:28.309 clat (usec): min=3430, max=13588, avg=7862.86, stdev=572.68 00:30:28.310 lat (usec): min=3460, max=13591, avg=7865.70, stdev=572.56 00:30:28.310 clat percentiles (usec): 00:30:28.310 | 1.00th=[ 6587], 5.00th=[ 6980], 10.00th=[ 7177], 20.00th=[ 7439], 00:30:28.310 | 30.00th=[ 7570], 40.00th=[ 7767], 50.00th=[ 7898], 60.00th=[ 8029], 00:30:28.310 | 70.00th=[ 8160], 80.00th=[ 8291], 90.00th=[ 8586], 95.00th=[ 8717], 00:30:28.310 | 99.00th=[ 9110], 99.50th=[ 9241], 99.90th=[10552], 99.95th=[12256], 00:30:28.310 | 99.99th=[13566] 00:30:28.310 bw ( KiB/s): min=35352, 
max=36632, per=99.93%, avg=36020.00, stdev=524.57, samples=4 00:30:28.310 iops : min= 8838, max= 9158, avg=9005.00, stdev=131.14, samples=4 00:30:28.310 write: IOPS=9027, BW=35.3MiB/s (37.0MB/s)(70.8MiB/2007msec); 0 zone resets 00:30:28.310 slat (usec): min=2, max=131, avg= 2.96, stdev= 1.40 00:30:28.310 clat (usec): min=1570, max=12371, avg=6292.39, stdev=510.78 00:30:28.310 lat (usec): min=1579, max=12374, avg=6295.35, stdev=510.72 00:30:28.310 clat percentiles (usec): 00:30:28.310 | 1.00th=[ 5145], 5.00th=[ 5538], 10.00th=[ 5735], 20.00th=[ 5932], 00:30:28.310 | 30.00th=[ 6063], 40.00th=[ 6194], 50.00th=[ 6325], 60.00th=[ 6390], 00:30:28.310 | 70.00th=[ 6521], 80.00th=[ 6652], 90.00th=[ 6849], 95.00th=[ 7046], 00:30:28.310 | 99.00th=[ 7373], 99.50th=[ 7570], 99.90th=[10028], 99.95th=[11338], 00:30:28.310 | 99.99th=[12256] 00:30:28.310 bw ( KiB/s): min=35952, max=36288, per=100.00%, avg=36134.00, stdev=151.42, samples=4 00:30:28.310 iops : min= 8988, max= 9072, avg=9033.50, stdev=37.85, samples=4 00:30:28.310 lat (msec) : 2=0.01%, 4=0.08%, 10=99.78%, 20=0.14% 00:30:28.310 cpu : usr=53.19%, sys=37.69%, ctx=70, majf=0, minf=32 00:30:28.310 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:30:28.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.310 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:28.310 issued rwts: total=18085,18119,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:28.310 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:28.310 00:30:28.310 Run status group 0 (all jobs): 00:30:28.310 READ: bw=35.2MiB/s (36.9MB/s), 35.2MiB/s-35.2MiB/s (36.9MB/s-36.9MB/s), io=70.6MiB (74.1MB), run=2007-2007msec 00:30:28.310 WRITE: bw=35.3MiB/s (37.0MB/s), 35.3MiB/s-35.3MiB/s (37.0MB/s-37.0MB/s), io=70.8MiB (74.2MB), run=2007-2007msec 00:30:28.310 09:40:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:28.310 09:40:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:28.310 09:40:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:28.310 09:40:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:28.310 09:40:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:28.310 09:40:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:28.310 09:40:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:30:28.310 09:40:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:28.310 09:40:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:28.310 09:40:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:28.310 09:40:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:30:28.310 09:40:12 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:28.310 09:40:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:28.310 09:40:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:28.310 09:40:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:28.310 09:40:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:28.310 09:40:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:28.310 09:40:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:28.310 09:40:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:28.310 09:40:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:28.310 09:40:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:28.310 09:40:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:28.310 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:30:28.310 fio-3.35 00:30:28.310 Starting 1 thread 00:30:28.310 EAL: No free 2048 kB hugepages reported on node 1 00:30:30.844 00:30:30.844 test: (groupid=0, jobs=1): err= 0: pid=850972: Sun Jul 14 09:40:15 2024 00:30:30.844 read: IOPS=7677, BW=120MiB/s (126MB/s)(241MiB/2008msec) 00:30:30.844 slat (nsec): min=2884, max=94460, avg=4045.99, stdev=1985.68 00:30:30.844 clat (usec): min=3629, max=56867, avg=10175.40, stdev=4468.25 00:30:30.844 lat (usec): min=3633, max=56871, avg=10179.45, stdev=4468.33 00:30:30.844 clat percentiles (usec): 00:30:30.844 | 1.00th=[ 5211], 5.00th=[ 6128], 10.00th=[ 6849], 20.00th=[ 7832], 00:30:30.844 | 30.00th=[ 8455], 40.00th=[ 9110], 50.00th=[ 9765], 60.00th=[10421], 00:30:30.844 | 70.00th=[10945], 80.00th=[11731], 90.00th=[13042], 95.00th=[14353], 00:30:30.844 | 99.00th=[17433], 99.50th=[50594], 99.90th=[55837], 99.95th=[56361], 00:30:30.844 | 99.99th=[56886] 00:30:30.844 bw ( KiB/s): min=50688, max=71232, per=50.81%, avg=62408.00, stdev=10084.02, samples=4 00:30:30.844 iops : min= 3168, max= 4452, avg=3900.50, stdev=630.25, samples=4 00:30:30.844 write: IOPS=4667, BW=72.9MiB/s (76.5MB/s)(128MiB/1752msec); 0 zone resets 00:30:30.844 slat (usec): min=31, max=155, avg=34.64, stdev= 5.70 00:30:30.844 clat (usec): min=7043, max=18477, avg=11308.32, stdev=1739.70 00:30:30.844 lat (usec): min=7078, max=18508, avg=11342.96, stdev=1739.86 00:30:30.844 clat percentiles (usec): 00:30:30.844 | 1.00th=[ 8160], 5.00th=[ 8717], 10.00th=[ 9241], 20.00th=[ 9765], 00:30:30.844 | 30.00th=[10290], 40.00th=[10683], 50.00th=[11076], 60.00th=[11731], 00:30:30.844 | 70.00th=[12256], 80.00th=[12649], 90.00th=[13566], 95.00th=[14353], 00:30:30.844 | 99.00th=[16188], 99.50th=[16909], 99.90th=[17957], 99.95th=[18220], 00:30:30.844 | 99.99th=[18482] 00:30:30.844 bw ( KiB/s): min=52480, max=73792, per=86.76%, avg=64792.00, stdev=10709.29, samples=4 00:30:30.844 iops : min= 3280, max= 4612, avg=4049.50, stdev=669.33, samples=4 00:30:30.844 lat (msec) : 4=0.02%, 10=43.39%, 20=56.05%, 50=0.17%, 100=0.36% 00:30:30.844 cpu : usr=71.61%, sys=24.10%, 
ctx=26, majf=0, minf=55 00:30:30.844 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:30:30.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:30.844 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:30.844 issued rwts: total=15416,8177,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:30.844 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:30.844 00:30:30.844 Run status group 0 (all jobs): 00:30:30.844 READ: bw=120MiB/s (126MB/s), 120MiB/s-120MiB/s (126MB/s-126MB/s), io=241MiB (253MB), run=2008-2008msec 00:30:30.844 WRITE: bw=72.9MiB/s (76.5MB/s), 72.9MiB/s-72.9MiB/s (76.5MB/s-76.5MB/s), io=128MiB (134MB), run=1752-1752msec 00:30:30.844 09:40:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:31.101 09:40:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:30:31.101 09:40:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:30:31.101 09:40:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:30:31.101 09:40:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 00:30:31.101 09:40:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 00:30:31.101 09:40:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:31.101 09:40:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:31.101 09:40:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:30:31.101 09:40:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:30:31.101 09:40:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:30:31.101 09:40:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:30:34.378 Nvme0n1 00:30:34.378 09:40:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:30:37.655 09:40:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=120aaf21-89f2-47d1-a127-1d706127961a 00:30:37.655 09:40:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 120aaf21-89f2-47d1-a127-1d706127961a 00:30:37.655 09:40:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=120aaf21-89f2-47d1-a127-1d706127961a 00:30:37.655 09:40:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:30:37.655 09:40:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:30:37.655 09:40:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:30:37.655 09:40:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:37.655 09:40:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:30:37.655 { 00:30:37.655 "uuid": "120aaf21-89f2-47d1-a127-1d706127961a", 00:30:37.655 "name": "lvs_0", 00:30:37.655 "base_bdev": "Nvme0n1", 00:30:37.655 "total_data_clusters": 930, 00:30:37.655 "free_clusters": 930, 
00:30:37.655 "block_size": 512, 00:30:37.655 "cluster_size": 1073741824 00:30:37.655 } 00:30:37.655 ]' 00:30:37.655 09:40:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="120aaf21-89f2-47d1-a127-1d706127961a") .free_clusters' 00:30:37.655 09:40:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=930 00:30:37.655 09:40:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="120aaf21-89f2-47d1-a127-1d706127961a") .cluster_size' 00:30:37.655 09:40:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:30:37.655 09:40:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=952320 00:30:37.655 09:40:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 952320 00:30:37.655 952320 00:30:37.655 09:40:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:30:37.913 60171b57-2a6a-4b92-b408-eb92ac5d5679 00:30:37.913 09:40:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:30:38.171 09:40:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:30:38.429 09:40:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:38.687 09:40:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:38.687 09:40:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:38.687 09:40:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:38.687 09:40:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:38.687 09:40:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:38.687 09:40:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:38.687 09:40:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:30:38.687 09:40:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:38.687 09:40:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:38.687 09:40:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:38.687 09:40:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:30:38.687 09:40:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:38.687 09:40:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:38.687 09:40:22 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:38.687 09:40:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:38.687 09:40:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:38.687 09:40:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:38.687 09:40:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:38.687 09:40:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:38.687 09:40:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:38.687 09:40:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:38.687 09:40:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:38.687 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:38.687 fio-3.35 00:30:38.687 Starting 1 thread 00:30:38.944 EAL: No free 2048 kB hugepages reported on node 1 00:30:41.467 00:30:41.467 test: (groupid=0, jobs=1): err= 0: pid=852250: Sun Jul 14 09:40:25 2024 00:30:41.467 read: IOPS=6015, BW=23.5MiB/s (24.6MB/s)(47.2MiB/2007msec) 00:30:41.467 slat (usec): min=2, max=152, avg= 2.71, stdev= 2.06 00:30:41.467 clat (usec): min=1120, max=171355, avg=11722.79, stdev=11638.30 00:30:41.467 lat (usec): min=1123, max=171392, avg=11725.50, stdev=11638.56 00:30:41.468 clat percentiles (msec): 00:30:41.468 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 11], 00:30:41.468 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 12], 00:30:41.468 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 12], 95.00th=[ 13], 00:30:41.468 | 99.00th=[ 14], 99.50th=[ 159], 99.90th=[ 171], 99.95th=[ 171], 00:30:41.468 | 99.99th=[ 171] 00:30:41.468 bw ( KiB/s): min=16672, max=26592, per=99.79%, avg=24014.00, stdev=4895.93, samples=4 00:30:41.468 iops : min= 4168, max= 6648, avg=6003.50, stdev=1223.98, samples=4 00:30:41.468 write: IOPS=6003, BW=23.5MiB/s (24.6MB/s)(47.1MiB/2007msec); 0 zone resets 00:30:41.468 slat (usec): min=2, max=101, avg= 2.81, stdev= 1.50 00:30:41.468 clat (usec): min=382, max=169784, avg=9378.87, stdev=10943.73 00:30:41.468 lat (usec): min=384, max=169789, avg=9381.67, stdev=10943.95 00:30:41.468 clat percentiles (msec): 00:30:41.468 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 9], 00:30:41.468 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 9], 00:30:41.468 | 70.00th=[ 9], 80.00th=[ 10], 90.00th=[ 10], 95.00th=[ 10], 00:30:41.468 | 99.00th=[ 11], 99.50th=[ 15], 99.90th=[ 169], 99.95th=[ 169], 00:30:41.468 | 99.99th=[ 169] 00:30:41.468 bw ( KiB/s): min=17704, max=26176, per=99.85%, avg=23978.00, stdev=4184.08, samples=4 00:30:41.468 iops : min= 4426, max= 6544, avg=5994.50, stdev=1046.02, samples=4 00:30:41.468 lat (usec) : 500=0.01%, 750=0.01% 00:30:41.468 lat (msec) : 2=0.02%, 4=0.14%, 10=55.30%, 20=43.99%, 250=0.53% 00:30:41.468 cpu : usr=55.38%, sys=39.03%, ctx=76, majf=0, minf=32 00:30:41.468 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:30:41.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:41.468 complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:41.468 issued rwts: total=12074,12049,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:41.468 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:41.468 00:30:41.468 Run status group 0 (all jobs): 00:30:41.468 READ: bw=23.5MiB/s (24.6MB/s), 23.5MiB/s-23.5MiB/s (24.6MB/s-24.6MB/s), io=47.2MiB (49.5MB), run=2007-2007msec 00:30:41.468 WRITE: bw=23.5MiB/s (24.6MB/s), 23.5MiB/s-23.5MiB/s (24.6MB/s-24.6MB/s), io=47.1MiB (49.4MB), run=2007-2007msec 00:30:41.468 09:40:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:41.468 09:40:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:30:42.399 09:40:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=516df7b0-4e1c-4e57-81de-c39f6f69bf54 00:30:42.399 09:40:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 516df7b0-4e1c-4e57-81de-c39f6f69bf54 00:30:42.399 09:40:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=516df7b0-4e1c-4e57-81de-c39f6f69bf54 00:30:42.399 09:40:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:30:42.399 09:40:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:30:42.399 09:40:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:30:42.399 09:40:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:42.656 09:40:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:30:42.656 { 00:30:42.656 "uuid": "120aaf21-89f2-47d1-a127-1d706127961a", 00:30:42.656 "name": "lvs_0", 00:30:42.656 "base_bdev": "Nvme0n1", 00:30:42.656 "total_data_clusters": 930, 00:30:42.656 "free_clusters": 0, 00:30:42.656 "block_size": 512, 00:30:42.656 "cluster_size": 1073741824 00:30:42.656 }, 00:30:42.656 { 00:30:42.656 "uuid": "516df7b0-4e1c-4e57-81de-c39f6f69bf54", 00:30:42.656 "name": "lvs_n_0", 00:30:42.656 "base_bdev": "60171b57-2a6a-4b92-b408-eb92ac5d5679", 00:30:42.656 "total_data_clusters": 237847, 00:30:42.656 "free_clusters": 237847, 00:30:42.656 "block_size": 512, 00:30:42.656 "cluster_size": 4194304 00:30:42.656 } 00:30:42.656 ]' 00:30:42.656 09:40:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="516df7b0-4e1c-4e57-81de-c39f6f69bf54") .free_clusters' 00:30:42.914 09:40:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=237847 00:30:42.914 09:40:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="516df7b0-4e1c-4e57-81de-c39f6f69bf54") .cluster_size' 00:30:42.914 09:40:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:30:42.914 09:40:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=951388 00:30:42.914 09:40:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 951388 00:30:42.914 951388 00:30:42.914 09:40:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:30:43.480 4cb7072d-3b5d-4247-ab61-9ead304ba205 00:30:43.480 09:40:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:30:43.737 09:40:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:30:43.995 09:40:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:30:44.253 09:40:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:44.253 09:40:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:44.253 09:40:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:44.253 09:40:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:44.253 09:40:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:44.253 09:40:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:44.253 09:40:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:30:44.253 09:40:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:44.253 09:40:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:44.253 09:40:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:44.253 09:40:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:30:44.253 09:40:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:44.253 09:40:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:44.253 09:40:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:44.253 09:40:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:44.253 09:40:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:44.253 09:40:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:44.253 09:40:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:44.253 09:40:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:44.253 09:40:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:44.253 09:40:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:44.253 09:40:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 
traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:44.511 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:44.511 fio-3.35 00:30:44.511 Starting 1 thread 00:30:44.511 EAL: No free 2048 kB hugepages reported on node 1 00:30:47.044 00:30:47.044 test: (groupid=0, jobs=1): err= 0: pid=852979: Sun Jul 14 09:40:31 2024 00:30:47.044 read: IOPS=5713, BW=22.3MiB/s (23.4MB/s)(44.9MiB/2010msec) 00:30:47.044 slat (usec): min=2, max=139, avg= 2.83, stdev= 2.18 00:30:47.044 clat (usec): min=4757, max=21007, avg=12420.84, stdev=1036.55 00:30:47.044 lat (usec): min=4762, max=21010, avg=12423.67, stdev=1036.44 00:30:47.044 clat percentiles (usec): 00:30:47.044 | 1.00th=[10028], 5.00th=[10814], 10.00th=[11207], 20.00th=[11600], 00:30:47.044 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12387], 60.00th=[12649], 00:30:47.044 | 70.00th=[12911], 80.00th=[13304], 90.00th=[13698], 95.00th=[13960], 00:30:47.044 | 99.00th=[14746], 99.50th=[15139], 99.90th=[18744], 99.95th=[20055], 00:30:47.044 | 99.99th=[20841] 00:30:47.044 bw ( KiB/s): min=21456, max=23696, per=99.97%, avg=22848.00, stdev=969.11, samples=4 00:30:47.044 iops : min= 5364, max= 5924, avg=5712.00, stdev=242.28, samples=4 00:30:47.044 write: IOPS=5702, BW=22.3MiB/s (23.4MB/s)(44.8MiB/2010msec); 0 zone resets 00:30:47.044 slat (usec): min=2, max=116, avg= 2.98, stdev= 1.92 00:30:47.044 clat (usec): min=2296, max=18707, avg=9862.56, stdev=956.40 00:30:47.044 lat (usec): min=2303, max=18710, avg=9865.55, stdev=956.35 00:30:47.044 clat percentiles (usec): 00:30:47.044 | 1.00th=[ 7701], 5.00th=[ 8455], 10.00th=[ 8717], 20.00th=[ 9110], 00:30:47.044 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10028], 00:30:47.044 | 70.00th=[10290], 80.00th=[10552], 90.00th=[10945], 95.00th=[11338], 00:30:47.044 | 99.00th=[11994], 99.50th=[12518], 99.90th=[17171], 99.95th=[18482], 00:30:47.045 | 99.99th=[18744] 00:30:47.045 bw ( KiB/s): min=22336, max=23168, per=99.90%, avg=22790.00, stdev=440.87, samples=4 00:30:47.045 iops : min= 5584, max= 5792, avg=5697.50, stdev=110.22, samples=4 00:30:47.045 lat (msec) : 4=0.05%, 10=28.96%, 20=70.96%, 50=0.03% 00:30:47.045 cpu : usr=54.60%, sys=40.52%, ctx=77, majf=0, minf=32 00:30:47.045 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:30:47.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.045 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:47.045 issued rwts: total=11484,11463,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:47.045 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:47.045 00:30:47.045 Run status group 0 (all jobs): 00:30:47.045 READ: bw=22.3MiB/s (23.4MB/s), 22.3MiB/s-22.3MiB/s (23.4MB/s-23.4MB/s), io=44.9MiB (47.0MB), run=2010-2010msec 00:30:47.045 WRITE: bw=22.3MiB/s (23.4MB/s), 22.3MiB/s-22.3MiB/s (23.4MB/s-23.4MB/s), io=44.8MiB (47.0MB), run=2010-2010msec 00:30:47.045 09:40:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:30:47.045 09:40:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:30:47.045 09:40:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:30:51.219 09:40:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 
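For reference, the two free_mb figures echoed earlier in this section, 952320 for lvs_0 and 951388 for lvs_n_0, follow directly from the bdev_lvol_get_lvstores JSON: free_clusters multiplied by cluster_size, converted to MiB. A quick sanity check with shell arithmetic, using the numbers from that output:

  # lvs_0: 930 free clusters of 1073741824 bytes (1 GiB) each
  echo $(( 930 * 1073741824 / 1024 / 1024 ))        # 952320
  # lvs_n_0: 237847 free clusters of 4194304 bytes (4 MiB) each
  echo $(( 237847 * 4194304 / 1024 / 1024 ))        # 951388

Those are the sizes passed to bdev_lvol_create for lbd_0 and lbd_nest_0 respectively.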
00:30:51.219 09:40:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:30:54.497 09:40:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:54.497 09:40:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:30:56.396 09:40:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:56.396 09:40:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:30:56.396 09:40:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:30:56.396 09:40:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:56.396 09:40:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:30:56.396 09:40:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:56.396 09:40:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:30:56.396 09:40:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:56.396 09:40:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:56.396 rmmod nvme_tcp 00:30:56.396 rmmod nvme_fabrics 00:30:56.396 rmmod nvme_keyring 00:30:56.396 09:40:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:56.396 09:40:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:30:56.396 09:40:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:30:56.396 09:40:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 850164 ']' 00:30:56.396 09:40:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 850164 00:30:56.396 09:40:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 850164 ']' 00:30:56.396 09:40:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 850164 00:30:56.396 09:40:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:30:56.396 09:40:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:56.396 09:40:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 850164 00:30:56.396 09:40:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:56.396 09:40:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:56.396 09:40:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 850164' 00:30:56.396 killing process with pid 850164 00:30:56.396 09:40:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 850164 00:30:56.396 09:40:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 850164 00:30:56.655 09:40:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:56.655 09:40:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:56.655 09:40:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:56.655 09:40:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:56.655 09:40:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:56.655 09:40:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:56.655 09:40:40 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:56.655 09:40:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:58.561 09:40:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:58.561 00:30:58.561 real 0m37.088s 00:30:58.561 user 2m20.758s 00:30:58.561 sys 0m7.594s 00:30:58.561 09:40:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:58.561 09:40:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:58.561 ************************************ 00:30:58.561 END TEST nvmf_fio_host 00:30:58.561 ************************************ 00:30:58.561 09:40:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:58.561 09:40:42 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:58.561 09:40:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:58.561 09:40:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:58.561 09:40:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:58.561 ************************************ 00:30:58.561 START TEST nvmf_failover 00:30:58.561 ************************************ 00:30:58.561 09:40:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:58.561 * Looking for test storage... 00:30:58.561 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:58.561 09:40:43 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:58.820 09:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:30:58.820 09:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:58.820 09:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:58.820 09:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:58.820 09:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:58.820 09:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:58.820 09:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:58.820 09:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:58.820 09:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:58.820 09:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:58.820 09:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:58.820 09:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:58.820 09:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:58.820 09:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:58.820 09:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:58.820 09:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:58.820 09:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:58.820 09:40:43 nvmf_tcp.nvmf_failover 
-- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:58.820 09:40:43 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:58.820 09:40:43 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:58.820 09:40:43 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:58.820 09:40:43 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.820 09:40:43 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.820 09:40:43 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.820 09:40:43 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:30:58.820 09:40:43 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.820 09:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:30:58.820 09:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:58.820 09:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:58.820 09:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:58.820 09:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:58.820 09:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:58.820 09:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' 
-n '' ']' 00:30:58.820 09:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:58.820 09:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:58.820 09:40:43 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:58.820 09:40:43 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:58.820 09:40:43 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:58.820 09:40:43 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:58.820 09:40:43 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:30:58.820 09:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:58.820 09:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:58.820 09:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:58.820 09:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:58.821 09:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:58.821 09:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:58.821 09:40:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:58.821 09:40:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:58.821 09:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:58.821 09:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:58.821 09:40:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:30:58.821 09:40:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:00.718 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:00.718 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:31:00.718 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:00.718 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:00.718 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:00.718 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:00.718 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:00.718 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:31:00.718 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:00.718 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
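A few entries above, the failover test's common setup generates a host NQN with nvme gen-hostnqn and keeps it, together with the matching host ID, in NVME_HOST for later nvme connect calls. Purely as an illustration of how nvme-cli would typically consume those values against a listener like the one this log creates on 10.0.0.2:4420 (the command below is not executed anywhere in this section; the subsystem NQN shown is the cnode1 subsystem created, and later deleted, earlier in this log):

  # Hedged example only: attach a host to the TCP listener using the generated host identity
  nvme connect -t tcp -a 10.0.0.2 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --hostid=5b23e107-7094-e311-b1cb-001e67a97d55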
00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:00.719 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:00.719 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:00.719 
09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:00.719 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:00.719 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:00.719 09:40:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:00.719 09:40:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 
00:31:00.719 09:40:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:00.719 09:40:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:00.719 09:40:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:00.719 09:40:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:00.719 09:40:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:00.719 09:40:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:00.719 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:00.719 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:31:00.719 00:31:00.719 --- 10.0.0.2 ping statistics --- 00:31:00.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:00.719 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:31:00.719 09:40:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:00.719 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:00.719 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:31:00.719 00:31:00.719 --- 10.0.0.1 ping statistics --- 00:31:00.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:00.719 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:31:00.719 09:40:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:00.719 09:40:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:31:00.719 09:40:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:00.719 09:40:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:00.719 09:40:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:00.719 09:40:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:00.719 09:40:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:00.719 09:40:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:00.719 09:40:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:00.719 09:40:45 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:31:00.719 09:40:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:00.719 09:40:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:00.719 09:40:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:00.719 09:40:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=856224 00:31:00.719 09:40:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:00.719 09:40:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 856224 00:31:00.719 09:40:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 856224 ']' 00:31:00.719 09:40:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:00.719 09:40:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:00.719 09:40:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:31:00.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:00.719 09:40:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:00.719 09:40:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:00.977 [2024-07-14 09:40:45.183230] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:31:00.977 [2024-07-14 09:40:45.183309] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:00.977 EAL: No free 2048 kB hugepages reported on node 1 00:31:00.977 [2024-07-14 09:40:45.247314] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:00.977 [2024-07-14 09:40:45.335105] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:00.977 [2024-07-14 09:40:45.335192] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:00.977 [2024-07-14 09:40:45.335206] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:00.977 [2024-07-14 09:40:45.335218] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:00.977 [2024-07-14 09:40:45.335227] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:00.977 [2024-07-14 09:40:45.335319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:00.977 [2024-07-14 09:40:45.335386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:00.977 [2024-07-14 09:40:45.335388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:01.234 09:40:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:01.234 09:40:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:31:01.234 09:40:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:01.234 09:40:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:01.234 09:40:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:01.234 09:40:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:01.234 09:40:45 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:01.491 [2024-07-14 09:40:45.749471] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:01.491 09:40:45 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:01.749 Malloc0 00:31:01.749 09:40:46 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:02.006 09:40:46 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:02.264 09:40:46 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:02.521 [2024-07-14 09:40:46.851804] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:02.521 09:40:46 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:02.779 [2024-07-14 09:40:47.092492] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:02.779 09:40:47 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:03.037 [2024-07-14 09:40:47.329339] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:31:03.037 09:40:47 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=856516 00:31:03.037 09:40:47 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:31:03.037 09:40:47 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:03.037 09:40:47 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 856516 /var/tmp/bdevperf.sock 00:31:03.037 09:40:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 856516 ']' 00:31:03.037 09:40:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:03.037 09:40:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:03.037 09:40:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:03.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
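For reference, the target-side configuration that host/failover.sh performs in the trace above condenses to the sketch below. It is not the verbatim commands from this run: $rpc is shorthand introduced here for scripts/rpc.py against the nvmf_tgt started earlier (the log invokes it with the full workspace path), and the relative bdevperf path is abbreviated; the subcommands, flags and values themselves are the ones visible in the trace.

  # Transport, backing bdev, subsystem and namespace
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0            # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # Three listeners on the same target IP, one per failover port
  for port in 4420 4421 4422; do
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
  done
  # Initiator side: bdevperf in wait-for-RPC mode (-z) on its own socket, verify workload, 15 s run
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &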
00:31:03.037 09:40:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:03.037 09:40:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:03.296 09:40:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:03.296 09:40:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:31:03.296 09:40:47 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:03.558 NVMe0n1 00:31:03.558 09:40:47 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:03.831 00:31:04.089 09:40:48 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=856646 00:31:04.089 09:40:48 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:04.089 09:40:48 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:31:05.023 09:40:49 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:05.282 [2024-07-14 09:40:49.514278] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c31270 is same with the state(5) to be set 00:31:05.282 [2024-07-14 09:40:49.514373] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c31270 is same with the state(5) to be set 00:31:05.282 [2024-07-14 09:40:49.514390] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c31270 is same with the state(5) to be set 00:31:05.282 [2024-07-14 09:40:49.514403] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c31270 is same with the state(5) to be set 00:31:05.282 [2024-07-14 09:40:49.514415] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c31270 is same with the state(5) to be set 00:31:05.282 [2024-07-14 09:40:49.514427] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c31270 is same with the state(5) to be set 00:31:05.282 09:40:49 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:31:08.566 09:40:52 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:08.566 00:31:08.566 09:40:52 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:08.823 09:40:53 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:31:12.099 09:40:56 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:12.099 [2024-07-14 09:40:56.370468] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:12.099 
09:40:56 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:31:13.033 09:40:57 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:13.292 [2024-07-14 09:40:57.670077] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c33680 is same with the state(5) to be set 00:31:13.292 [2024-07-14 09:40:57.670153] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c33680 is same with the state(5) to be set 00:31:13.292 [2024-07-14 09:40:57.670169] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c33680 is same with the state(5) to be set 00:31:13.292 [2024-07-14 09:40:57.670197] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c33680 is same with the state(5) to be set 00:31:13.292 [2024-07-14 09:40:57.670208] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c33680 is same with the state(5) to be set 00:31:13.292 [2024-07-14 09:40:57.670220] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c33680 is same with the state(5) to be set 00:31:13.292 [2024-07-14 09:40:57.670231] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c33680 is same with the state(5) to be set 00:31:13.292 [2024-07-14 09:40:57.670243] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c33680 is same with the state(5) to be set 00:31:13.292 [2024-07-14 09:40:57.670254] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c33680 is same with the state(5) to be set 00:31:13.292 [2024-07-14 09:40:57.670265] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c33680 is same with the state(5) to be set 00:31:13.292 [2024-07-14 09:40:57.670277] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c33680 is same with the state(5) to be set 00:31:13.292 [2024-07-14 09:40:57.670289] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c33680 is same with the state(5) to be set 00:31:13.292 [2024-07-14 09:40:57.670300] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c33680 is same with the state(5) to be set 00:31:13.292 [2024-07-14 09:40:57.670313] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c33680 is same with the state(5) to be set 00:31:13.292 [2024-07-14 09:40:57.670324] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c33680 is same with the state(5) to be set 00:31:13.292 [2024-07-14 09:40:57.670336] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c33680 is same with the state(5) to be set 00:31:13.292 [2024-07-14 09:40:57.670347] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c33680 is same with the state(5) to be set 00:31:13.292 [2024-07-14 09:40:57.670359] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c33680 is same with the state(5) to be set 00:31:13.292 [2024-07-14 09:40:57.670370] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c33680 is same with the state(5) to be set 00:31:13.292 [2024-07-14 09:40:57.670381] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c33680 is same with the 
state(5) to be set 00:31:13.292 [2024-07-14 09:40:57.670392] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c33680 is same with the state(5) to be set 00:31:13.292 [2024-07-14 09:40:57.670404] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c33680 is same with the state(5) to be set 00:31:13.292 [2024-07-14 09:40:57.670415] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c33680 is same with the state(5) to be set 00:31:13.292 [2024-07-14 09:40:57.670426] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c33680 is same with the state(5) to be set 00:31:13.292 [2024-07-14 09:40:57.670438] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c33680 is same with the state(5) to be set 00:31:13.292 [2024-07-14 09:40:57.670449] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c33680 is same with the state(5) to be set 00:31:13.292 [2024-07-14 09:40:57.670460] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c33680 is same with the state(5) to be set 00:31:13.292 [2024-07-14 09:40:57.670481] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c33680 is same with the state(5) to be set 00:31:13.292 [2024-07-14 09:40:57.670493] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c33680 is same with the state(5) to be set 00:31:13.292 [2024-07-14 09:40:57.670504] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c33680 is same with the state(5) to be set 00:31:13.292 [2024-07-14 09:40:57.670515] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c33680 is same with the state(5) to be set 00:31:13.292 [2024-07-14 09:40:57.670526] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c33680 is same with the state(5) to be set 00:31:13.292 [2024-07-14 09:40:57.670537] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c33680 is same with the state(5) to be set 00:31:13.292 [2024-07-14 09:40:57.670550] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c33680 is same with the state(5) to be set 00:31:13.292 [2024-07-14 09:40:57.670561] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c33680 is same with the state(5) to be set 00:31:13.292 [2024-07-14 09:40:57.670573] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c33680 is same with the state(5) to be set 00:31:13.292 [2024-07-14 09:40:57.670584] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c33680 is same with the state(5) to be set 00:31:13.292 [2024-07-14 09:40:57.670594] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c33680 is same with the state(5) to be set 00:31:13.292 [2024-07-14 09:40:57.670605] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c33680 is same with the state(5) to be set 00:31:13.292 [2024-07-14 09:40:57.670617] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c33680 is same with the state(5) to be set 00:31:13.292 [2024-07-14 09:40:57.670628] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c33680 is same with the state(5) to be set 00:31:13.292 [2024-07-14 09:40:57.670653] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1c33680 is same with the state(5) to be set 00:31:13.292 [2024-07-14 09:40:57.670664] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c33680 is same with the state(5) to be set 00:31:13.292 [2024-07-14 09:40:57.670676] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c33680 is same with the state(5) to be set 00:31:13.292 [2024-07-14 09:40:57.670687] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c33680 is same with the state(5) to be set 00:31:13.292 [2024-07-14 09:40:57.670698] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c33680 is same with the state(5) to be set 00:31:13.292 [2024-07-14 09:40:57.670710] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c33680 is same with the state(5) to be set 00:31:13.292 [2024-07-14 09:40:57.670720] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c33680 is same with the state(5) to be set 00:31:13.292 [2024-07-14 09:40:57.670731] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c33680 is same with the state(5) to be set 00:31:13.292 [2024-07-14 09:40:57.670742] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c33680 is same with the state(5) to be set 00:31:13.292 [2024-07-14 09:40:57.670752] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c33680 is same with the state(5) to be set 00:31:13.292 [2024-07-14 09:40:57.670763] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c33680 is same with the state(5) to be set 00:31:13.292 [2024-07-14 09:40:57.670773] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c33680 is same with the state(5) to be set 00:31:13.292 [2024-07-14 09:40:57.670784] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c33680 is same with the state(5) to be set 00:31:13.292 [2024-07-14 09:40:57.670798] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c33680 is same with the state(5) to be set 00:31:13.292 [2024-07-14 09:40:57.670809] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c33680 is same with the state(5) to be set 00:31:13.292 09:40:57 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 856646 00:31:19.848 0 00:31:19.848 09:41:03 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 856516 00:31:19.848 09:41:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 856516 ']' 00:31:19.848 09:41:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 856516 00:31:19.848 09:41:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:31:19.848 09:41:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:19.848 09:41:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 856516 00:31:19.848 09:41:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:19.848 09:41:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:19.848 09:41:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 856516' 00:31:19.848 killing process with pid 856516 00:31:19.848 09:41:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 856516 
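The failover itself is driven purely by adding and removing listeners while bdevperf keeps the verify workload running; the controller is managed through the secondary RPC socket /var/tmp/bdevperf.sock. A sketch of that sequence, using only the calls and timings visible in the trace ($rpc and $rpc_bdevperf are shorthand for rpc.py without and with -s /var/tmp/bdevperf.sock; paths abbreviated):

  # Attach the controller over the first two paths
  $rpc_bdevperf bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc_bdevperf bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &   # 15 s verify run
  sleep 1
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # drop path 1
  sleep 3
  $rpc_bdevperf bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421   # drop path 2
  sleep 3
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420      # restore path 1
  sleep 1
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422   # drop path 3
  wait    # wait for perform_tests, then the trap tears down bdevperf and the target

The bursts of "ABORTED - SQ DELETION" completions and "recv state of tqpair ... is same with the state(5)" notices that surround these steps are consistent with in-flight I/O on the removed listener being aborted and retried on a surviving path, which is the behaviour the verify workload is exercising.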
00:31:19.848 09:41:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 856516 00:31:19.848 09:41:03 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:19.848 [2024-07-14 09:40:47.392373] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:31:19.848 [2024-07-14 09:40:47.392462] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid856516 ] 00:31:19.848 EAL: No free 2048 kB hugepages reported on node 1 00:31:19.848 [2024-07-14 09:40:47.453101] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:19.848 [2024-07-14 09:40:47.540996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:19.848 Running I/O for 15 seconds... 00:31:19.848 [2024-07-14 09:40:49.514770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:79232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.848 [2024-07-14 09:40:49.514818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.848 [2024-07-14 09:40:49.514857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:79240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.848 [2024-07-14 09:40:49.514906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.848 [2024-07-14 09:40:49.514933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:79248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.848 [2024-07-14 09:40:49.514958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.848 [2024-07-14 09:40:49.514988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:79256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.848 [2024-07-14 09:40:49.515026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.848 [2024-07-14 09:40:49.515054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:79264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.848 [2024-07-14 09:40:49.515078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.848 [2024-07-14 09:40:49.515105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:79272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.848 [2024-07-14 09:40:49.515128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.848 [2024-07-14 09:40:49.515155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:79280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.848 [2024-07-14 09:40:49.515193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.848 [2024-07-14 09:40:49.515219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 
nsid:1 lba:79288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.848 [2024-07-14 09:40:49.515242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.848 [2024-07-14 09:40:49.515268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:79296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.848 [2024-07-14 09:40:49.515292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.848 [2024-07-14 09:40:49.515317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:79304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.848 [2024-07-14 09:40:49.515340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.848 [2024-07-14 09:40:49.515364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.848 [2024-07-14 09:40:49.515387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.848 [2024-07-14 09:40:49.515422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:79320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.848 [2024-07-14 09:40:49.515446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.848 [2024-07-14 09:40:49.515472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:79328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.848 [2024-07-14 09:40:49.515494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.848 [2024-07-14 09:40:49.515520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:79336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.848 [2024-07-14 09:40:49.515542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.848 [2024-07-14 09:40:49.515567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:79344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.848 [2024-07-14 09:40:49.515590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.848 [2024-07-14 09:40:49.515614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:79352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.848 [2024-07-14 09:40:49.515638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.848 [2024-07-14 09:40:49.515662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:79360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.848 [2024-07-14 09:40:49.515686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.848 [2024-07-14 09:40:49.515710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:79368 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:31:19.849 [2024-07-14 09:40:49.515734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.849 [2024-07-14 09:40:49.515758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:79376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.849 [2024-07-14 09:40:49.515780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.849 [2024-07-14 09:40:49.515804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:79384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.849 [2024-07-14 09:40:49.515828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.849 [2024-07-14 09:40:49.515855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:79392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.849 [2024-07-14 09:40:49.515903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.849 [2024-07-14 09:40:49.515929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:79400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.849 [2024-07-14 09:40:49.515952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.849 [2024-07-14 09:40:49.515980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:78600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.849 [2024-07-14 09:40:49.516003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.849 [2024-07-14 09:40:49.516029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:78608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.849 [2024-07-14 09:40:49.516058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.849 [2024-07-14 09:40:49.516085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:78616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.849 [2024-07-14 09:40:49.516108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.849 [2024-07-14 09:40:49.516134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:78624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.849 [2024-07-14 09:40:49.516157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.849 [2024-07-14 09:40:49.516197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:78632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.849 [2024-07-14 09:40:49.516219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.849 [2024-07-14 09:40:49.516246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.849 [2024-07-14 
09:40:49.516269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.849 [2024-07-14 09:40:49.516297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:78648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.849 [2024-07-14 09:40:49.516319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.849 [2024-07-14 09:40:49.516344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:78656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.849 [2024-07-14 09:40:49.516366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.849 [2024-07-14 09:40:49.516392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:79408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.849 [2024-07-14 09:40:49.516429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.849 [2024-07-14 09:40:49.516456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:79416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.849 [2024-07-14 09:40:49.516479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.849 [2024-07-14 09:40:49.516506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:79424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.849 [2024-07-14 09:40:49.516529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.849 [2024-07-14 09:40:49.516555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.849 [2024-07-14 09:40:49.516578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.849 [2024-07-14 09:40:49.516604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:79440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.849 [2024-07-14 09:40:49.516628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.849 [2024-07-14 09:40:49.516653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:79448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.849 [2024-07-14 09:40:49.516678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.849 [2024-07-14 09:40:49.516715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:79456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.849 [2024-07-14 09:40:49.516738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.849 [2024-07-14 09:40:49.516764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:79464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.849 [2024-07-14 09:40:49.516787] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.849 [2024-07-14 09:40:49.516813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:79472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.849 [2024-07-14 09:40:49.516838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.849 [2024-07-14 09:40:49.516884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:79480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.849 [2024-07-14 09:40:49.516912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.849 [2024-07-14 09:40:49.516938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:79488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.849 [2024-07-14 09:40:49.516963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.849 [2024-07-14 09:40:49.516989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:79496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.849 [2024-07-14 09:40:49.517014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.849 [2024-07-14 09:40:49.517041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:79504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.849 [2024-07-14 09:40:49.517068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.849 [2024-07-14 09:40:49.517094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:79512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.849 [2024-07-14 09:40:49.517119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.849 [2024-07-14 09:40:49.517145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.849 [2024-07-14 09:40:49.517184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.849 [2024-07-14 09:40:49.517208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:79528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.849 [2024-07-14 09:40:49.517233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.849 [2024-07-14 09:40:49.517259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:79536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.849 [2024-07-14 09:40:49.517282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.849 [2024-07-14 09:40:49.517308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:78664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.849 [2024-07-14 09:40:49.517331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.849 [2024-07-14 09:40:49.517357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:78672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.849 [2024-07-14 09:40:49.517386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.849 [2024-07-14 09:40:49.517413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:78680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.849 [2024-07-14 09:40:49.517435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.849 [2024-07-14 09:40:49.517462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:78688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.849 [2024-07-14 09:40:49.517486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.849 [2024-07-14 09:40:49.517512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:78696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.849 [2024-07-14 09:40:49.517534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.849 [2024-07-14 09:40:49.517561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.849 [2024-07-14 09:40:49.517584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.849 [2024-07-14 09:40:49.517610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:78712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.849 [2024-07-14 09:40:49.517634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.849 [2024-07-14 09:40:49.517659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:79544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.849 [2024-07-14 09:40:49.517682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.849 [2024-07-14 09:40:49.517708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:79552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.849 [2024-07-14 09:40:49.517732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.849 [2024-07-14 09:40:49.517758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:78720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.849 [2024-07-14 09:40:49.517782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.849 [2024-07-14 09:40:49.517806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:78728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.849 [2024-07-14 09:40:49.517831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:31:19.849 [2024-07-14 09:40:49.517855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.849 [2024-07-14 09:40:49.517904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.849 [2024-07-14 09:40:49.517931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:78744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.850 [2024-07-14 09:40:49.517956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.850 [2024-07-14 09:40:49.517981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:78752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.850 [2024-07-14 09:40:49.518006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.850 [2024-07-14 09:40:49.518031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:78760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.850 [2024-07-14 09:40:49.518062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.850 [2024-07-14 09:40:49.518088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:78768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.850 [2024-07-14 09:40:49.518113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.850 [2024-07-14 09:40:49.518138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:78776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.850 [2024-07-14 09:40:49.518164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.850 [2024-07-14 09:40:49.518205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:79560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.850 [2024-07-14 09:40:49.518230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.850 [2024-07-14 09:40:49.518254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:78784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.850 [2024-07-14 09:40:49.518279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.850 [2024-07-14 09:40:49.518303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:78792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.850 [2024-07-14 09:40:49.518327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.850 [2024-07-14 09:40:49.518352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:78800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.850 [2024-07-14 09:40:49.518376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.850 
[2024-07-14 09:40:49.518401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:78808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.850 [2024-07-14 09:40:49.518425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.850 [2024-07-14 09:40:49.518454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:78816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.850 [2024-07-14 09:40:49.518479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.850 [2024-07-14 09:40:49.518504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:78824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.850 [2024-07-14 09:40:49.518528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.850 [2024-07-14 09:40:49.518553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:78832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.850 [2024-07-14 09:40:49.518577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.850 [2024-07-14 09:40:49.518604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:78840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.850 [2024-07-14 09:40:49.518627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.850 [2024-07-14 09:40:49.518655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:78848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.850 [2024-07-14 09:40:49.518679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.850 [2024-07-14 09:40:49.518710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.850 [2024-07-14 09:40:49.518735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.850 [2024-07-14 09:40:49.518761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:78864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.850 [2024-07-14 09:40:49.518784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.850 [2024-07-14 09:40:49.518810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:78872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.850 [2024-07-14 09:40:49.518833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.850 [2024-07-14 09:40:49.518859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:78880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.850 [2024-07-14 09:40:49.518908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.850 [2024-07-14 09:40:49.518937] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:78888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.850 [2024-07-14 09:40:49.518961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.850 [2024-07-14 09:40:49.518989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.850 [2024-07-14 09:40:49.519014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.850 [2024-07-14 09:40:49.519043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:78904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.850 [2024-07-14 09:40:49.519067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.850 [2024-07-14 09:40:49.519095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:78912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.850 [2024-07-14 09:40:49.519119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.850 [2024-07-14 09:40:49.519146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.850 [2024-07-14 09:40:49.519171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.850 [2024-07-14 09:40:49.519212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:78928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.850 [2024-07-14 09:40:49.519235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.850 [2024-07-14 09:40:49.519263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:78936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.850 [2024-07-14 09:40:49.519287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.850 [2024-07-14 09:40:49.519314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:78944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.850 [2024-07-14 09:40:49.519338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.850 [2024-07-14 09:40:49.519365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:78952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.850 [2024-07-14 09:40:49.519395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.850 [2024-07-14 09:40:49.519423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:78960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.850 [2024-07-14 09:40:49.519446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.850 [2024-07-14 09:40:49.519475] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:44 nsid:1 lba:78968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.850 [2024-07-14 09:40:49.519498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.850 [2024-07-14 09:40:49.519525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:78976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.850 [2024-07-14 09:40:49.519548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.850 [2024-07-14 09:40:49.519576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:78984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.850 [2024-07-14 09:40:49.519599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.850 [2024-07-14 09:40:49.519627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:78992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.850 [2024-07-14 09:40:49.519651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.850 [2024-07-14 09:40:49.519678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:79000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.850 [2024-07-14 09:40:49.519703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.850 [2024-07-14 09:40:49.519730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:79008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.850 [2024-07-14 09:40:49.519753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.850 [2024-07-14 09:40:49.519781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:79016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.850 [2024-07-14 09:40:49.519804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.850 [2024-07-14 09:40:49.519832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:79024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.850 [2024-07-14 09:40:49.519855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.850 [2024-07-14 09:40:49.519906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:79032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.850 [2024-07-14 09:40:49.519931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.850 [2024-07-14 09:40:49.519960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:79040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.850 [2024-07-14 09:40:49.519984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.850 [2024-07-14 09:40:49.520012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 
lba:79048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.850 [2024-07-14 09:40:49.520036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.850 [2024-07-14 09:40:49.520068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:79056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.850 [2024-07-14 09:40:49.520093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.850 [2024-07-14 09:40:49.520120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:79064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.850 [2024-07-14 09:40:49.520144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.851 [2024-07-14 09:40:49.520172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:79072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.851 [2024-07-14 09:40:49.520209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.851 [2024-07-14 09:40:49.520236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:79080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.851 [2024-07-14 09:40:49.520259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.851 [2024-07-14 09:40:49.520284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:79088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.851 [2024-07-14 09:40:49.520307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.851 [2024-07-14 09:40:49.520333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:79096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.851 [2024-07-14 09:40:49.520356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.851 [2024-07-14 09:40:49.520380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:79104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.851 [2024-07-14 09:40:49.520403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.851 [2024-07-14 09:40:49.520428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.851 [2024-07-14 09:40:49.520452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.851 [2024-07-14 09:40:49.520476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:79120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.851 [2024-07-14 09:40:49.520501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.851 [2024-07-14 09:40:49.520525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:79128 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:19.851 [2024-07-14 09:40:49.520550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.851 [2024-07-14 09:40:49.520575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:79136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.851 [2024-07-14 09:40:49.520600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.851 [2024-07-14 09:40:49.520625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:79144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.851 [2024-07-14 09:40:49.520649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.851 [2024-07-14 09:40:49.520674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:79152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.851 [2024-07-14 09:40:49.520704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.851 [2024-07-14 09:40:49.520731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.851 [2024-07-14 09:40:49.520755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.851 [2024-07-14 09:40:49.520781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:79168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.851 [2024-07-14 09:40:49.520805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.851 [2024-07-14 09:40:49.520831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:79176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.851 [2024-07-14 09:40:49.520855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.851 [2024-07-14 09:40:49.520903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:79184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.851 [2024-07-14 09:40:49.520929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.851 [2024-07-14 09:40:49.520955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:79192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.851 [2024-07-14 09:40:49.520981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.851 [2024-07-14 09:40:49.521008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:79200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.851 [2024-07-14 09:40:49.521032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.851 [2024-07-14 09:40:49.521059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:79208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.851 [2024-07-14 
09:40:49.521083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.851 [2024-07-14 09:40:49.521111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:79216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.851 [2024-07-14 09:40:49.521134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.851 [2024-07-14 09:40:49.521162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:79224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.851 [2024-07-14 09:40:49.521199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.851 [2024-07-14 09:40:49.521226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:79568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.851 [2024-07-14 09:40:49.521249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.851 [2024-07-14 09:40:49.521276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:79576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.851 [2024-07-14 09:40:49.521298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.851 [2024-07-14 09:40:49.521325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:79584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.851 [2024-07-14 09:40:49.521347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.851 [2024-07-14 09:40:49.521374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:79592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.851 [2024-07-14 09:40:49.521403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.851 [2024-07-14 09:40:49.521433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:79600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.851 [2024-07-14 09:40:49.521457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.851 [2024-07-14 09:40:49.521484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:79608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.851 [2024-07-14 09:40:49.521507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.851 [2024-07-14 09:40:49.521552] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.851 [2024-07-14 09:40:49.521576] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.851 [2024-07-14 09:40:49.521596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79616 len:8 PRP1 0x0 PRP2 0x0 00:31:19.851 [2024-07-14 09:40:49.521618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.851 
[2024-07-14 09:40:49.521699] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a29760 was disconnected and freed. reset controller.
00:31:19.851 [2024-07-14 09:40:49.521727] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:31:19.851 [2024-07-14 09:40:49.521788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:31:19.851 [2024-07-14 09:40:49.521816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:19.851 [2024-07-14 09:40:49.521841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:31:19.851 [2024-07-14 09:40:49.521874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:19.851 [2024-07-14 09:40:49.521901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:31:19.851 [2024-07-14 09:40:49.521935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:19.851 [2024-07-14 09:40:49.521960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:31:19.851 [2024-07-14 09:40:49.521985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:19.851 [2024-07-14 09:40:49.522007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:19.851 [2024-07-14 09:40:49.522088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19f5830 (9): Bad file descriptor
00:31:19.851 [2024-07-14 09:40:49.526253] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:19.851 [2024-07-14 09:40:49.559204] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
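The block above is the first failover event of this run: once the initial path (10.0.0.2:4420) is torn down, the outstanding I/O on qpair 0x1a29760 completes with ABORTED - SQ DELETION, bdev_nvme switches to the alternate trid at 10.0.0.2:4421, and the controller reset completes successfully. A minimal sketch of how such a two-path attachment is typically set up through SPDK's rpc.py is shown below; the controller name Nvme0 is an illustrative assumption, while the subsystem NQN and addresses are the ones printed in the log (depending on the SPDK version, an explicit multipath/failover policy argument may also be needed):
# Sketch: attach the same TCP subsystem twice under one controller name so
# bdev_nvme has an alternate trid to fail over to when the first path drops.
# Assumption: controller name "Nvme0"; NQN and addresses taken from the log above.
./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1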
00:31:19.851 [2024-07-14 09:40:53.122663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:75832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.851 [2024-07-14 09:40:53.122743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.851 [2024-07-14 09:40:53.122781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:75840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.851 [2024-07-14 09:40:53.122806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.851 [2024-07-14 09:40:53.122841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:75392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.851 [2024-07-14 09:40:53.122896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.851 [2024-07-14 09:40:53.122921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.851 [2024-07-14 09:40:53.122944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.851 [2024-07-14 09:40:53.122969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:75408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.851 [2024-07-14 09:40:53.122992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.851 [2024-07-14 09:40:53.123017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:75416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.851 [2024-07-14 09:40:53.123040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.851 [2024-07-14 09:40:53.123065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:75424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.851 [2024-07-14 09:40:53.123088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.851 [2024-07-14 09:40:53.123114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:75432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.851 [2024-07-14 09:40:53.123137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.851 [2024-07-14 09:40:53.123171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:75440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.851 [2024-07-14 09:40:53.123208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.852 [2024-07-14 09:40:53.123231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:75448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.852 [2024-07-14 09:40:53.123254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.852 [2024-07-14 09:40:53.123277] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:75456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.852 [2024-07-14 09:40:53.123297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.852 [2024-07-14 09:40:53.123320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:75464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.852 [2024-07-14 09:40:53.123341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.852 [2024-07-14 09:40:53.123367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:75472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.852 [2024-07-14 09:40:53.123388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.852 [2024-07-14 09:40:53.123415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:75480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.852 [2024-07-14 09:40:53.123438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.852 [2024-07-14 09:40:53.123463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:75488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.852 [2024-07-14 09:40:53.123492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.852 [2024-07-14 09:40:53.123518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:75496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.852 [2024-07-14 09:40:53.123542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.852 [2024-07-14 09:40:53.123566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:75504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.852 [2024-07-14 09:40:53.123591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.852 [2024-07-14 09:40:53.123616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:75512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.852 [2024-07-14 09:40:53.123639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.852 [2024-07-14 09:40:53.123665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:75520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.852 [2024-07-14 09:40:53.123687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.852 [2024-07-14 09:40:53.123712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:75528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.852 [2024-07-14 09:40:53.123735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.852 [2024-07-14 09:40:53.123759] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:75536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.852 [2024-07-14 09:40:53.123781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.852 [2024-07-14 09:40:53.123806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:75544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.852 [2024-07-14 09:40:53.123828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.852 [2024-07-14 09:40:53.123887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:75552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.852 [2024-07-14 09:40:53.123913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.852 [2024-07-14 09:40:53.123941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:75560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.852 [2024-07-14 09:40:53.123964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.852 [2024-07-14 09:40:53.123992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:75568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.852 [2024-07-14 09:40:53.124015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.852 [2024-07-14 09:40:53.124042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.852 [2024-07-14 09:40:53.124066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.852 [2024-07-14 09:40:53.124092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:75584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.852 [2024-07-14 09:40:53.124115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.852 [2024-07-14 09:40:53.124146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:75592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.852 [2024-07-14 09:40:53.124187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.852 [2024-07-14 09:40:53.124212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:75600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.852 [2024-07-14 09:40:53.124235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.852 [2024-07-14 09:40:53.124261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.852 [2024-07-14 09:40:53.124283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.852 [2024-07-14 09:40:53.124309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:95 nsid:1 lba:75616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.852 [2024-07-14 09:40:53.124332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.852 [2024-07-14 09:40:53.124358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:75624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.852 [2024-07-14 09:40:53.124381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.852 [2024-07-14 09:40:53.124405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:75632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.852 [2024-07-14 09:40:53.124428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.852 [2024-07-14 09:40:53.124452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:75640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.852 [2024-07-14 09:40:53.124475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.852 [2024-07-14 09:40:53.124499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:75648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.852 [2024-07-14 09:40:53.124523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.852 [2024-07-14 09:40:53.124546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:75656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.852 [2024-07-14 09:40:53.124571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.852 [2024-07-14 09:40:53.124596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:75664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.852 [2024-07-14 09:40:53.124620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.852 [2024-07-14 09:40:53.124644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:75672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.852 [2024-07-14 09:40:53.124668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.852 [2024-07-14 09:40:53.124692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:75680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.852 [2024-07-14 09:40:53.124717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.852 [2024-07-14 09:40:53.124742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:75688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.852 [2024-07-14 09:40:53.124770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.852 [2024-07-14 09:40:53.124795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:75696 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.852 [2024-07-14 09:40:53.124820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.852 [2024-07-14 09:40:53.124843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:75704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.852 [2024-07-14 09:40:53.124889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.852 [2024-07-14 09:40:53.124927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:75712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.852 [2024-07-14 09:40:53.124952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.852 [2024-07-14 09:40:53.124977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:75720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.853 [2024-07-14 09:40:53.125001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.853 [2024-07-14 09:40:53.125026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:75728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.853 [2024-07-14 09:40:53.125051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.853 [2024-07-14 09:40:53.125076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.853 [2024-07-14 09:40:53.125101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.853 [2024-07-14 09:40:53.125125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:75744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.853 [2024-07-14 09:40:53.125149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.853 [2024-07-14 09:40:53.125196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:75752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.853 [2024-07-14 09:40:53.125218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.853 [2024-07-14 09:40:53.125259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:75760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.853 [2024-07-14 09:40:53.125282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.853 [2024-07-14 09:40:53.125308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:75768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.853 [2024-07-14 09:40:53.125331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.853 [2024-07-14 09:40:53.125358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:75776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:19.853 [2024-07-14 09:40:53.125381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.853 [2024-07-14 09:40:53.125408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.853 [2024-07-14 09:40:53.125430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.853 [2024-07-14 09:40:53.125457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:75856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.853 [2024-07-14 09:40:53.125500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.853 [2024-07-14 09:40:53.125527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:75864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.853 [2024-07-14 09:40:53.125551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.853 [2024-07-14 09:40:53.125595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:75872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.853 [2024-07-14 09:40:53.125619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.853 [2024-07-14 09:40:53.125647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:75880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.853 [2024-07-14 09:40:53.125669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.853 [2024-07-14 09:40:53.125696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:75888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.853 [2024-07-14 09:40:53.125719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.853 [2024-07-14 09:40:53.125746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:75896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.853 [2024-07-14 09:40:53.125768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.853 [2024-07-14 09:40:53.125795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:75904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.853 [2024-07-14 09:40:53.125818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.853 [2024-07-14 09:40:53.125860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:75912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.853 [2024-07-14 09:40:53.125892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.853 [2024-07-14 09:40:53.125927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:75920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.853 [2024-07-14 09:40:53.125950] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.853 [2024-07-14 09:40:53.125978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.853 [2024-07-14 09:40:53.126003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.853 [2024-07-14 09:40:53.126031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:75936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.853 [2024-07-14 09:40:53.126054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.853 [2024-07-14 09:40:53.126081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:75944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.853 [2024-07-14 09:40:53.126105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.853 [2024-07-14 09:40:53.126133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:75952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.853 [2024-07-14 09:40:53.126157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.853 [2024-07-14 09:40:53.126211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:75960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.853 [2024-07-14 09:40:53.126235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.853 [2024-07-14 09:40:53.126260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:75968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.853 [2024-07-14 09:40:53.126285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.853 [2024-07-14 09:40:53.126309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:75976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.853 [2024-07-14 09:40:53.126333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.853 [2024-07-14 09:40:53.126357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:75984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.853 [2024-07-14 09:40:53.126381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.853 [2024-07-14 09:40:53.126406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:75992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.853 [2024-07-14 09:40:53.126433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.853 [2024-07-14 09:40:53.126459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:76000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.853 [2024-07-14 09:40:53.126484] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.853 [2024-07-14 09:40:53.126509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:76008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.853 [2024-07-14 09:40:53.126534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.853 [2024-07-14 09:40:53.126560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:76016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.853 [2024-07-14 09:40:53.126584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.853 [2024-07-14 09:40:53.126609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:76024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.853 [2024-07-14 09:40:53.126633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.853 [2024-07-14 09:40:53.126658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:76032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.853 [2024-07-14 09:40:53.126682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.853 [2024-07-14 09:40:53.126708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:76040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.853 [2024-07-14 09:40:53.126731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.853 [2024-07-14 09:40:53.126756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:76048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.853 [2024-07-14 09:40:53.126779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.853 [2024-07-14 09:40:53.126805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:76056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.853 [2024-07-14 09:40:53.126840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.853 [2024-07-14 09:40:53.126891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:76064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.853 [2024-07-14 09:40:53.126924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.853 [2024-07-14 09:40:53.126950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:76072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.853 [2024-07-14 09:40:53.126983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.853 [2024-07-14 09:40:53.127010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:76080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.853 [2024-07-14 09:40:53.127033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.853 [2024-07-14 09:40:53.127060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:76088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.853 [2024-07-14 09:40:53.127083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.853 [2024-07-14 09:40:53.127110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:76096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.853 [2024-07-14 09:40:53.127133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.853 [2024-07-14 09:40:53.127159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:76104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.853 [2024-07-14 09:40:53.127183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.853 [2024-07-14 09:40:53.127233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:76112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.853 [2024-07-14 09:40:53.127264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.853 [2024-07-14 09:40:53.127292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.853 [2024-07-14 09:40:53.127315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.854 [2024-07-14 09:40:53.127342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:76128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.854 [2024-07-14 09:40:53.127364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.854 [2024-07-14 09:40:53.127391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:76136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.854 [2024-07-14 09:40:53.127414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.854 [2024-07-14 09:40:53.127441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:76144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.854 [2024-07-14 09:40:53.127464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.854 [2024-07-14 09:40:53.127490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:76152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.854 [2024-07-14 09:40:53.127514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.854 [2024-07-14 09:40:53.127547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:76160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.854 [2024-07-14 09:40:53.127570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.854 
[2024-07-14 09:40:53.127596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:76168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.854 [2024-07-14 09:40:53.127621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.854 [2024-07-14 09:40:53.127646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:76176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.854 [2024-07-14 09:40:53.127672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.854 [2024-07-14 09:40:53.127697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:76184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.854 [2024-07-14 09:40:53.127722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.854 [2024-07-14 09:40:53.127747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:76192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.854 [2024-07-14 09:40:53.127771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.854 [2024-07-14 09:40:53.127795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:76200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.854 [2024-07-14 09:40:53.127819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.854 [2024-07-14 09:40:53.127844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:76208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.854 [2024-07-14 09:40:53.127890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.854 [2024-07-14 09:40:53.127922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:76216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.854 [2024-07-14 09:40:53.127949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.854 [2024-07-14 09:40:53.127974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:76224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.854 [2024-07-14 09:40:53.128000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.854 [2024-07-14 09:40:53.128043] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.854 [2024-07-14 09:40:53.128069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76232 len:8 PRP1 0x0 PRP2 0x0 00:31:19.854 [2024-07-14 09:40:53.128091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.854 [2024-07-14 09:40:53.128121] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.854 [2024-07-14 09:40:53.128141] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.854 [2024-07-14 09:40:53.128172] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76240 len:8 PRP1 0x0 PRP2 0x0 00:31:19.854 [2024-07-14 09:40:53.128207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.854 [2024-07-14 09:40:53.128231] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.854 [2024-07-14 09:40:53.128250] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.854 [2024-07-14 09:40:53.128274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76248 len:8 PRP1 0x0 PRP2 0x0 00:31:19.854 [2024-07-14 09:40:53.128296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.854 [2024-07-14 09:40:53.128317] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.854 [2024-07-14 09:40:53.128337] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.854 [2024-07-14 09:40:53.128355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76256 len:8 PRP1 0x0 PRP2 0x0 00:31:19.854 [2024-07-14 09:40:53.128378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.854 [2024-07-14 09:40:53.128401] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.854 [2024-07-14 09:40:53.128419] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.854 [2024-07-14 09:40:53.128439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76264 len:8 PRP1 0x0 PRP2 0x0 00:31:19.854 [2024-07-14 09:40:53.128462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.854 [2024-07-14 09:40:53.128484] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.854 [2024-07-14 09:40:53.128503] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.854 [2024-07-14 09:40:53.128521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76272 len:8 PRP1 0x0 PRP2 0x0 00:31:19.854 [2024-07-14 09:40:53.128543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.854 [2024-07-14 09:40:53.128565] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.854 [2024-07-14 09:40:53.128584] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.854 [2024-07-14 09:40:53.128603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76280 len:8 PRP1 0x0 PRP2 0x0 00:31:19.854 [2024-07-14 09:40:53.128623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.854 [2024-07-14 09:40:53.128647] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.854 [2024-07-14 09:40:53.128665] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.854 [2024-07-14 09:40:53.128684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:76288 len:8 PRP1 0x0 PRP2 0x0 00:31:19.854 [2024-07-14 09:40:53.128706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.854 [2024-07-14 09:40:53.128727] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.854 [2024-07-14 09:40:53.128747] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.854 [2024-07-14 09:40:53.128765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76296 len:8 PRP1 0x0 PRP2 0x0 00:31:19.854 [2024-07-14 09:40:53.128787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.854 [2024-07-14 09:40:53.128809] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.854 [2024-07-14 09:40:53.128828] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.854 [2024-07-14 09:40:53.128849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76304 len:8 PRP1 0x0 PRP2 0x0 00:31:19.854 [2024-07-14 09:40:53.128906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.854 [2024-07-14 09:40:53.128933] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.854 [2024-07-14 09:40:53.128958] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.854 [2024-07-14 09:40:53.128980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76312 len:8 PRP1 0x0 PRP2 0x0 00:31:19.854 [2024-07-14 09:40:53.129002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.854 [2024-07-14 09:40:53.129025] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.854 [2024-07-14 09:40:53.129044] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.854 [2024-07-14 09:40:53.129064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76320 len:8 PRP1 0x0 PRP2 0x0 00:31:19.854 [2024-07-14 09:40:53.129088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.854 [2024-07-14 09:40:53.129111] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.854 [2024-07-14 09:40:53.129131] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.854 [2024-07-14 09:40:53.129151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76328 len:8 PRP1 0x0 PRP2 0x0 00:31:19.854 [2024-07-14 09:40:53.129195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.854 [2024-07-14 09:40:53.129219] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.854 [2024-07-14 09:40:53.129237] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.854 [2024-07-14 09:40:53.129257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76336 len:8 PRP1 0x0 PRP2 
0x0 00:31:19.854 [2024-07-14 09:40:53.129278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.854 [2024-07-14 09:40:53.129301] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.854 [2024-07-14 09:40:53.129322] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.854 [2024-07-14 09:40:53.129341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76344 len:8 PRP1 0x0 PRP2 0x0 00:31:19.854 [2024-07-14 09:40:53.129363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.854 [2024-07-14 09:40:53.129386] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.854 [2024-07-14 09:40:53.129405] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.854 [2024-07-14 09:40:53.129424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76352 len:8 PRP1 0x0 PRP2 0x0 00:31:19.854 [2024-07-14 09:40:53.129444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.854 [2024-07-14 09:40:53.129468] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.854 [2024-07-14 09:40:53.129486] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.854 [2024-07-14 09:40:53.129505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76360 len:8 PRP1 0x0 PRP2 0x0 00:31:19.854 [2024-07-14 09:40:53.129527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.854 [2024-07-14 09:40:53.129548] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.854 [2024-07-14 09:40:53.129568] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.854 [2024-07-14 09:40:53.129587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76368 len:8 PRP1 0x0 PRP2 0x0 00:31:19.854 [2024-07-14 09:40:53.129609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.855 [2024-07-14 09:40:53.129637] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.855 [2024-07-14 09:40:53.129656] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.855 [2024-07-14 09:40:53.129677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76376 len:8 PRP1 0x0 PRP2 0x0 00:31:19.855 [2024-07-14 09:40:53.129698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.855 [2024-07-14 09:40:53.129722] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.855 [2024-07-14 09:40:53.129740] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.855 [2024-07-14 09:40:53.129759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76384 len:8 PRP1 0x0 PRP2 0x0 00:31:19.855 [2024-07-14 09:40:53.129781] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.855 [2024-07-14 09:40:53.129802] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.855 [2024-07-14 09:40:53.129823] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.855 [2024-07-14 09:40:53.129841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76392 len:8 PRP1 0x0 PRP2 0x0 00:31:19.855 [2024-07-14 09:40:53.129886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.855 [2024-07-14 09:40:53.129919] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.855 [2024-07-14 09:40:53.129938] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.855 [2024-07-14 09:40:53.129961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76400 len:8 PRP1 0x0 PRP2 0x0 00:31:19.855 [2024-07-14 09:40:53.129984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.855 [2024-07-14 09:40:53.130009] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.855 [2024-07-14 09:40:53.130028] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.855 [2024-07-14 09:40:53.130047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76408 len:8 PRP1 0x0 PRP2 0x0 00:31:19.855 [2024-07-14 09:40:53.130070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.855 [2024-07-14 09:40:53.130093] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.855 [2024-07-14 09:40:53.130114] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.855 [2024-07-14 09:40:53.130133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75784 len:8 PRP1 0x0 PRP2 0x0 00:31:19.855 [2024-07-14 09:40:53.130156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.855 [2024-07-14 09:40:53.130204] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.855 [2024-07-14 09:40:53.130222] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.855 [2024-07-14 09:40:53.130242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75792 len:8 PRP1 0x0 PRP2 0x0 00:31:19.855 [2024-07-14 09:40:53.130264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.855 [2024-07-14 09:40:53.130289] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.855 [2024-07-14 09:40:53.130308] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.855 [2024-07-14 09:40:53.130327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75800 len:8 PRP1 0x0 PRP2 0x0 00:31:19.855 [2024-07-14 09:40:53.130354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.855 [2024-07-14 09:40:53.130378] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.855 [2024-07-14 09:40:53.130398] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.855 [2024-07-14 09:40:53.130417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75808 len:8 PRP1 0x0 PRP2 0x0 00:31:19.855 [2024-07-14 09:40:53.130441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.855 [2024-07-14 09:40:53.130464] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.855 [2024-07-14 09:40:53.130484] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.855 [2024-07-14 09:40:53.130504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75816 len:8 PRP1 0x0 PRP2 0x0 00:31:19.855 [2024-07-14 09:40:53.130524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.855 [2024-07-14 09:40:53.130548] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.855 [2024-07-14 09:40:53.130567] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.855 [2024-07-14 09:40:53.130596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75824 len:8 PRP1 0x0 PRP2 0x0 00:31:19.855 [2024-07-14 09:40:53.130618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.855 [2024-07-14 09:40:53.130703] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a25300 was disconnected and freed. reset controller. 
00:31:19.855 [2024-07-14 09:40:53.130732] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:31:19.855 [2024-07-14 09:40:53.130792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:31:19.855 [2024-07-14 09:40:53.130820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:19.855 [2024-07-14 09:40:53.130846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:31:19.855 [2024-07-14 09:40:53.130878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:19.855 [2024-07-14 09:40:53.130905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:31:19.855 [2024-07-14 09:40:53.130936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:19.855 [2024-07-14 09:40:53.130962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:31:19.855 [2024-07-14 09:40:53.130984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:19.855 [2024-07-14 09:40:53.131007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:19.855 [2024-07-14 09:40:53.131078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19f5830 (9): Bad file descriptor
00:31:19.855 [2024-07-14 09:40:53.135171] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:19.855 [2024-07-14 09:40:53.295766] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:31:19.855 [2024-07-14 09:40:57.672982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:41624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.855 [2024-07-14 09:40:57.673035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.855 [2024-07-14 09:40:57.673087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:41632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.855 [2024-07-14 09:40:57.673115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.855 [2024-07-14 09:40:57.673144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:41640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.855 [2024-07-14 09:40:57.673183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.855 [2024-07-14 09:40:57.673210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:41648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.855 [2024-07-14 09:40:57.673248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.855 [2024-07-14 09:40:57.673273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:41656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.855 [2024-07-14 09:40:57.673297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.855 [2024-07-14 09:40:57.673322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:41664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.855 [2024-07-14 09:40:57.673346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.855 [2024-07-14 09:40:57.673371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:41672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.855 [2024-07-14 09:40:57.673395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.855 [2024-07-14 09:40:57.673419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:41680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.855 [2024-07-14 09:40:57.673442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.855 [2024-07-14 09:40:57.673468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:41688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.855 [2024-07-14 09:40:57.673491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.855 [2024-07-14 09:40:57.673516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:41696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.855 [2024-07-14 09:40:57.673539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.855 [2024-07-14 09:40:57.673564] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:41704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.855 [2024-07-14 09:40:57.673601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.855 [2024-07-14 09:40:57.673628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:41712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.855 [2024-07-14 09:40:57.673652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.855 [2024-07-14 09:40:57.673680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:41720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.855 [2024-07-14 09:40:57.673703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.855 [2024-07-14 09:40:57.673729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:41728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.855 [2024-07-14 09:40:57.673759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.855 [2024-07-14 09:40:57.673786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:41736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.855 [2024-07-14 09:40:57.673810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.855 [2024-07-14 09:40:57.673835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:41744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.855 [2024-07-14 09:40:57.673859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.855 [2024-07-14 09:40:57.673924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:41752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.855 [2024-07-14 09:40:57.673948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.855 [2024-07-14 09:40:57.673977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:41760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.856 [2024-07-14 09:40:57.674001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.856 [2024-07-14 09:40:57.674028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:41768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.856 [2024-07-14 09:40:57.674053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.856 [2024-07-14 09:40:57.674080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:41776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.856 [2024-07-14 09:40:57.674105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.856 [2024-07-14 09:40:57.674131] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:41784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.856 [2024-07-14 09:40:57.674154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.856 [2024-07-14 09:40:57.674198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:41792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.856 [2024-07-14 09:40:57.674221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.856 [2024-07-14 09:40:57.674249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:41800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.856 [2024-07-14 09:40:57.674273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.856 [2024-07-14 09:40:57.674300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:41808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.856 [2024-07-14 09:40:57.674323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.856 [2024-07-14 09:40:57.674351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:41816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.856 [2024-07-14 09:40:57.674374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.856 [2024-07-14 09:40:57.674401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:41824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.856 [2024-07-14 09:40:57.674424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.856 [2024-07-14 09:40:57.674457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:41832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.856 [2024-07-14 09:40:57.674481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.856 [2024-07-14 09:40:57.674508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:41840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.856 [2024-07-14 09:40:57.674532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.856 [2024-07-14 09:40:57.674559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:41848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.856 [2024-07-14 09:40:57.674582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.856 [2024-07-14 09:40:57.674609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:41856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.856 [2024-07-14 09:40:57.674631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.856 [2024-07-14 09:40:57.674658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:41864 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.856 [2024-07-14 09:40:57.674681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.856 [2024-07-14 09:40:57.674708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:41872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.856 [2024-07-14 09:40:57.674732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.856 [2024-07-14 09:40:57.674758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:41496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.856 [2024-07-14 09:40:57.674783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.856 [2024-07-14 09:40:57.674809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:41880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.856 [2024-07-14 09:40:57.674839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.856 [2024-07-14 09:40:57.674891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:41888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.856 [2024-07-14 09:40:57.674918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.856 [2024-07-14 09:40:57.674946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:41896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.856 [2024-07-14 09:40:57.674971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.856 [2024-07-14 09:40:57.674998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:41904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.856 [2024-07-14 09:40:57.675023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.856 [2024-07-14 09:40:57.675050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:41912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.856 [2024-07-14 09:40:57.675074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.856 [2024-07-14 09:40:57.675102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:41920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.856 [2024-07-14 09:40:57.675131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.856 [2024-07-14 09:40:57.675160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:41928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.856 [2024-07-14 09:40:57.675197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.856 [2024-07-14 09:40:57.675225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:41936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.856 
[2024-07-14 09:40:57.675250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.856 [2024-07-14 09:40:57.675277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:41944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.856 [2024-07-14 09:40:57.675300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.856 [2024-07-14 09:40:57.675327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:41952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.856 [2024-07-14 09:40:57.675351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.856 [2024-07-14 09:40:57.675377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:41960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.856 [2024-07-14 09:40:57.675400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.856 [2024-07-14 09:40:57.675426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:41968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.856 [2024-07-14 09:40:57.675450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.856 [2024-07-14 09:40:57.675475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:41976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.856 [2024-07-14 09:40:57.675498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.856 [2024-07-14 09:40:57.675523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:41984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.856 [2024-07-14 09:40:57.675549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.856 [2024-07-14 09:40:57.675574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.856 [2024-07-14 09:40:57.675598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.856 [2024-07-14 09:40:57.675626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:42000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.856 [2024-07-14 09:40:57.675651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.856 [2024-07-14 09:40:57.675677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:42008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.856 [2024-07-14 09:40:57.675708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.856 [2024-07-14 09:40:57.675734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:42016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.856 [2024-07-14 09:40:57.675759] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.856 [2024-07-14 09:40:57.675785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:42024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.856 [2024-07-14 09:40:57.675816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.856 [2024-07-14 09:40:57.675844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:42032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.856 [2024-07-14 09:40:57.675890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.856 [2024-07-14 09:40:57.675921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:42040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.856 [2024-07-14 09:40:57.675946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.856 [2024-07-14 09:40:57.675974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:42048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.856 [2024-07-14 09:40:57.675999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.857 [2024-07-14 09:40:57.676026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:42056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.857 [2024-07-14 09:40:57.676052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.857 [2024-07-14 09:40:57.676080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:41504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.857 [2024-07-14 09:40:57.676105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.857 [2024-07-14 09:40:57.676131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:41512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.857 [2024-07-14 09:40:57.676156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.857 [2024-07-14 09:40:57.676195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:41520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.857 [2024-07-14 09:40:57.676220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.857 [2024-07-14 09:40:57.676245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:41528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.857 [2024-07-14 09:40:57.676270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.857 [2024-07-14 09:40:57.676295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:41536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.857 [2024-07-14 09:40:57.676320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.857 [2024-07-14 09:40:57.676346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:41544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.857 [2024-07-14 09:40:57.676371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.857 [2024-07-14 09:40:57.676396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:41552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.857 [2024-07-14 09:40:57.676423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.857 [2024-07-14 09:40:57.676448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:41560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.857 [2024-07-14 09:40:57.676473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.857 [2024-07-14 09:40:57.676505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:41568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.857 [2024-07-14 09:40:57.676529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.857 [2024-07-14 09:40:57.676555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:41576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.857 [2024-07-14 09:40:57.676582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.857 [2024-07-14 09:40:57.676607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:41584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.857 [2024-07-14 09:40:57.676632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.857 [2024-07-14 09:40:57.676657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:41592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.857 [2024-07-14 09:40:57.676682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.857 [2024-07-14 09:40:57.676708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:41600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.857 [2024-07-14 09:40:57.676732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.857 [2024-07-14 09:40:57.676757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:41608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.857 [2024-07-14 09:40:57.676784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.857 [2024-07-14 09:40:57.676810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:41616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:19.857 [2024-07-14 09:40:57.676837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.857 [2024-07-14 09:40:57.676887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:42064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.857 [2024-07-14 09:40:57.676914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.857 [2024-07-14 09:40:57.676940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:42072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.857 [2024-07-14 09:40:57.676966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.857 [2024-07-14 09:40:57.676992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:42080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.857 [2024-07-14 09:40:57.677017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.857 [2024-07-14 09:40:57.677042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:42088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.857 [2024-07-14 09:40:57.677069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.857 [2024-07-14 09:40:57.677093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:42096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.857 [2024-07-14 09:40:57.677119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.857 [2024-07-14 09:40:57.677144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:42104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.857 [2024-07-14 09:40:57.677189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.857 [2024-07-14 09:40:57.677215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:42112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.857 [2024-07-14 09:40:57.677240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.857 [2024-07-14 09:40:57.677266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:42120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.857 [2024-07-14 09:40:57.677291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.857 [2024-07-14 09:40:57.677316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:42128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.857 [2024-07-14 09:40:57.677340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.857 [2024-07-14 09:40:57.677365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:42136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.857 [2024-07-14 09:40:57.677390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.857 [2024-07-14 
09:40:57.677414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:42144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.857 [2024-07-14 09:40:57.677438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.857 [2024-07-14 09:40:57.677464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:42152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.857 [2024-07-14 09:40:57.677488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.857 [2024-07-14 09:40:57.677514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:42160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.857 [2024-07-14 09:40:57.677537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.857 [2024-07-14 09:40:57.677564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:42168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.857 [2024-07-14 09:40:57.677587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.857 [2024-07-14 09:40:57.677615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:42176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.857 [2024-07-14 09:40:57.677637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.857 [2024-07-14 09:40:57.677665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:42184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.857 [2024-07-14 09:40:57.677688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.857 [2024-07-14 09:40:57.677715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:42192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.857 [2024-07-14 09:40:57.677737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.857 [2024-07-14 09:40:57.677764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:42200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.857 [2024-07-14 09:40:57.677787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.857 [2024-07-14 09:40:57.677819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:42208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.857 [2024-07-14 09:40:57.677844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.857 [2024-07-14 09:40:57.677895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:42216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.857 [2024-07-14 09:40:57.677922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.857 [2024-07-14 09:40:57.677950] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:42224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.857 [2024-07-14 09:40:57.677975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.857 [2024-07-14 09:40:57.678003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:42232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.857 [2024-07-14 09:40:57.678027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.857 [2024-07-14 09:40:57.678055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:42240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.857 [2024-07-14 09:40:57.678078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.857 [2024-07-14 09:40:57.678107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:42248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.857 [2024-07-14 09:40:57.678131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.857 [2024-07-14 09:40:57.678159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:42256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.857 [2024-07-14 09:40:57.678197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.857 [2024-07-14 09:40:57.678225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:42264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.857 [2024-07-14 09:40:57.678248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.857 [2024-07-14 09:40:57.678275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:42272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.857 [2024-07-14 09:40:57.678299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.858 [2024-07-14 09:40:57.678326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:42280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.858 [2024-07-14 09:40:57.678349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.858 [2024-07-14 09:40:57.678377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:42288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.858 [2024-07-14 09:40:57.678400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.858 [2024-07-14 09:40:57.678427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:42296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.858 [2024-07-14 09:40:57.678450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.858 [2024-07-14 09:40:57.678478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:119 nsid:1 lba:42304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.858 [2024-07-14 09:40:57.678503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.858 [2024-07-14 09:40:57.678536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:42312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.858 [2024-07-14 09:40:57.678560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.858 [2024-07-14 09:40:57.678587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:42320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.858 [2024-07-14 09:40:57.678610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.858 [2024-07-14 09:40:57.678638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:42328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.858 [2024-07-14 09:40:57.678662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.858 [2024-07-14 09:40:57.678690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:42336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.858 [2024-07-14 09:40:57.678714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.858 [2024-07-14 09:40:57.678742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:42344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.858 [2024-07-14 09:40:57.678766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.858 [2024-07-14 09:40:57.678794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:42352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.858 [2024-07-14 09:40:57.678818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.858 [2024-07-14 09:40:57.678845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:42360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.858 [2024-07-14 09:40:57.678891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.858 [2024-07-14 09:40:57.678927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:42368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.858 [2024-07-14 09:40:57.678951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.858 [2024-07-14 09:40:57.678980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:42376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.858 [2024-07-14 09:40:57.679005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.858 [2024-07-14 09:40:57.679033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:42384 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:31:19.858 [2024-07-14 09:40:57.679057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.858 [2024-07-14 09:40:57.679085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:42392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.858 [2024-07-14 09:40:57.679109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.858 [2024-07-14 09:40:57.679138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:42400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.858 [2024-07-14 09:40:57.679163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.858 [2024-07-14 09:40:57.679207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:42408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.858 [2024-07-14 09:40:57.679242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.858 [2024-07-14 09:40:57.679269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:42416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.858 [2024-07-14 09:40:57.679293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.858 [2024-07-14 09:40:57.679319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:42424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.858 [2024-07-14 09:40:57.679343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.858 [2024-07-14 09:40:57.679369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:42432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.858 [2024-07-14 09:40:57.679393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.858 [2024-07-14 09:40:57.679418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:42440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:19.858 [2024-07-14 09:40:57.679443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.858 [2024-07-14 09:40:57.679488] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.858 [2024-07-14 09:40:57.679514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42448 len:8 PRP1 0x0 PRP2 0x0 00:31:19.858 [2024-07-14 09:40:57.679536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.858 [2024-07-14 09:40:57.679566] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.858 [2024-07-14 09:40:57.679586] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.858 [2024-07-14 09:40:57.679607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42456 len:8 PRP1 0x0 PRP2 0x0 00:31:19.858 [2024-07-14 
09:40:57.679629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.858 [2024-07-14 09:40:57.679652] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.858 [2024-07-14 09:40:57.679673] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.858 [2024-07-14 09:40:57.679693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42464 len:8 PRP1 0x0 PRP2 0x0 00:31:19.858 [2024-07-14 09:40:57.679715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.858 [2024-07-14 09:40:57.679739] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.858 [2024-07-14 09:40:57.679758] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.858 [2024-07-14 09:40:57.679778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42472 len:8 PRP1 0x0 PRP2 0x0 00:31:19.858 [2024-07-14 09:40:57.679799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.858 [2024-07-14 09:40:57.679825] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.858 [2024-07-14 09:40:57.679844] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.858 [2024-07-14 09:40:57.679864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42480 len:8 PRP1 0x0 PRP2 0x0 00:31:19.858 [2024-07-14 09:40:57.679909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.858 [2024-07-14 09:40:57.679939] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.858 [2024-07-14 09:40:57.679967] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.858 [2024-07-14 09:40:57.679989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42488 len:8 PRP1 0x0 PRP2 0x0 00:31:19.858 [2024-07-14 09:40:57.680013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.858 [2024-07-14 09:40:57.680038] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.858 [2024-07-14 09:40:57.680060] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.858 [2024-07-14 09:40:57.680080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42496 len:8 PRP1 0x0 PRP2 0x0 00:31:19.858 [2024-07-14 09:40:57.680103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.858 [2024-07-14 09:40:57.680128] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.858 [2024-07-14 09:40:57.680147] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.858 [2024-07-14 09:40:57.680169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42504 len:8 PRP1 0x0 PRP2 0x0 00:31:19.858 [2024-07-14 09:40:57.680205] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.858 [2024-07-14 09:40:57.680231] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:19.858 [2024-07-14 09:40:57.680251] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:19.858 [2024-07-14 09:40:57.680271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42512 len:8 PRP1 0x0 PRP2 0x0 00:31:19.858 [2024-07-14 09:40:57.680292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.858 [2024-07-14 09:40:57.680366] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a25300 was disconnected and freed. reset controller. 00:31:19.858 [2024-07-14 09:40:57.680396] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:31:19.858 [2024-07-14 09:40:57.680457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:19.858 [2024-07-14 09:40:57.680484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.858 [2024-07-14 09:40:57.680511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:19.858 [2024-07-14 09:40:57.680534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.858 [2024-07-14 09:40:57.680560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:19.858 [2024-07-14 09:40:57.680582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.858 [2024-07-14 09:40:57.680607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:19.858 [2024-07-14 09:40:57.680629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.858 [2024-07-14 09:40:57.680653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:19.858 [2024-07-14 09:40:57.680706] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19f5830 (9): Bad file descriptor 00:31:19.858 [2024-07-14 09:40:57.684738] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:19.858 [2024-07-14 09:40:57.714771] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:31:19.858 00:31:19.858 Latency(us) 00:31:19.859 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:19.859 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:19.859 Verification LBA range: start 0x0 length 0x4000 00:31:19.859 NVMe0n1 : 15.01 8778.03 34.29 560.57 0.00 13678.22 1080.13 16602.45 00:31:19.859 =================================================================================================================== 00:31:19.859 Total : 8778.03 34.29 560.57 0.00 13678.22 1080.13 16602.45 00:31:19.859 Received shutdown signal, test time was about 15.000000 seconds 00:31:19.859 00:31:19.859 Latency(us) 00:31:19.859 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:19.859 =================================================================================================================== 00:31:19.859 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:19.859 09:41:03 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:31:19.859 09:41:03 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:31:19.859 09:41:03 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:31:19.859 09:41:03 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=858485 00:31:19.859 09:41:03 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:31:19.859 09:41:03 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 858485 /var/tmp/bdevperf.sock 00:31:19.859 09:41:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 858485 ']' 00:31:19.859 09:41:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:19.859 09:41:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:19.859 09:41:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:19.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
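The host/failover.sh@76-@87 steps traced below drive the forced failover entirely through SPDK RPCs. A rough shell condensation of that sequence (with $rootdir used here only as shorthand for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk; the xtrace output itself remains the authoritative record) looks like:

    rpc=$rootdir/scripts/rpc.py

    # Expose nqn.2016-06.io.spdk:cnode1 on two additional target ports.
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

    # Attach the same controller name to all three ports through bdevperf's RPC socket;
    # per the trace, the extra trids act as failover paths for NVMe0.
    for port in 4420 4421 4422; do
        $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
            -a 10.0.0.2 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done

    # Sanity-check the controller, then drop the active path to force a failover.
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    sleep 3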
00:31:19.859 09:41:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:19.859 09:41:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:19.859 09:41:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:19.859 09:41:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:31:19.859 09:41:03 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:19.859 [2024-07-14 09:41:04.223303] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:19.859 09:41:04 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:20.116 [2024-07-14 09:41:04.459994] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:31:20.116 09:41:04 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:20.679 NVMe0n1 00:31:20.679 09:41:04 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:20.935 00:31:20.935 09:41:05 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:21.499 00:31:21.499 09:41:05 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:21.499 09:41:05 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:31:21.499 09:41:05 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:21.756 09:41:06 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:31:25.032 09:41:09 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:25.032 09:41:09 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:31:25.032 09:41:09 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=859149 00:31:25.032 09:41:09 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:25.032 09:41:09 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 859149 00:31:26.404 0 00:31:26.404 09:41:10 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:26.404 [2024-07-14 09:41:03.739715] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:31:26.404 [2024-07-14 09:41:03.739802] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid858485 ] 00:31:26.404 EAL: No free 2048 kB hugepages reported on node 1 00:31:26.404 [2024-07-14 09:41:03.798314] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:26.404 [2024-07-14 09:41:03.880932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:26.404 [2024-07-14 09:41:06.174160] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:31:26.404 [2024-07-14 09:41:06.174262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:26.404 [2024-07-14 09:41:06.174292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.404 [2024-07-14 09:41:06.174331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:26.404 [2024-07-14 09:41:06.174354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.404 [2024-07-14 09:41:06.174378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:26.404 [2024-07-14 09:41:06.174401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.404 [2024-07-14 09:41:06.174424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:26.404 [2024-07-14 09:41:06.174447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.404 [2024-07-14 09:41:06.174477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:26.404 [2024-07-14 09:41:06.174539] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:26.404 [2024-07-14 09:41:06.174581] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a8f830 (9): Bad file descriptor 00:31:26.404 [2024-07-14 09:41:06.186087] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:26.404 Running I/O for 1 seconds... 
00:31:26.404 00:31:26.404 Latency(us) 00:31:26.404 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:26.404 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:26.404 Verification LBA range: start 0x0 length 0x4000 00:31:26.404 NVMe0n1 : 1.02 6435.12 25.14 0.00 0.00 19799.40 3786.52 17087.91 00:31:26.404 =================================================================================================================== 00:31:26.404 Total : 6435.12 25.14 0.00 0.00 19799.40 3786.52 17087.91 00:31:26.404 09:41:10 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:26.404 09:41:10 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:31:26.404 09:41:10 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:26.662 09:41:11 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:26.662 09:41:11 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:31:26.919 09:41:11 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:27.177 09:41:11 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:31:30.455 09:41:14 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:30.455 09:41:14 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:31:30.455 09:41:14 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 858485 00:31:30.455 09:41:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 858485 ']' 00:31:30.455 09:41:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 858485 00:31:30.455 09:41:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:31:30.455 09:41:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:30.455 09:41:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 858485 00:31:30.455 09:41:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:30.455 09:41:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:30.455 09:41:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 858485' 00:31:30.455 killing process with pid 858485 00:31:30.455 09:41:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 858485 00:31:30.455 09:41:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 858485 00:31:30.747 09:41:15 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:31:30.747 09:41:15 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:31.005 09:41:15 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:31:31.005 09:41:15 
nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:31.005 09:41:15 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:31:31.005 09:41:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:31.005 09:41:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:31:31.005 09:41:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:31.005 09:41:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:31:31.005 09:41:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:31.005 09:41:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:31.005 rmmod nvme_tcp 00:31:31.005 rmmod nvme_fabrics 00:31:31.005 rmmod nvme_keyring 00:31:31.005 09:41:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:31.005 09:41:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:31:31.005 09:41:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:31:31.005 09:41:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 856224 ']' 00:31:31.005 09:41:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 856224 00:31:31.005 09:41:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 856224 ']' 00:31:31.005 09:41:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 856224 00:31:31.005 09:41:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:31:31.005 09:41:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:31.005 09:41:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 856224 00:31:31.005 09:41:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:31.005 09:41:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:31.005 09:41:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 856224' 00:31:31.005 killing process with pid 856224 00:31:31.005 09:41:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 856224 00:31:31.005 09:41:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 856224 00:31:31.264 09:41:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:31.264 09:41:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:31.264 09:41:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:31.264 09:41:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:31.264 09:41:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:31.264 09:41:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:31.264 09:41:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:31.264 09:41:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:33.800 09:41:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:33.800 00:31:33.800 real 0m34.742s 00:31:33.800 user 2m1.969s 00:31:33.800 sys 0m5.952s 00:31:33.800 09:41:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:33.800 09:41:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:33.800 
************************************ 00:31:33.800 END TEST nvmf_failover 00:31:33.800 ************************************ 00:31:33.800 09:41:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:31:33.800 09:41:17 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:33.800 09:41:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:33.800 09:41:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:33.800 09:41:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:33.800 ************************************ 00:31:33.800 START TEST nvmf_host_discovery 00:31:33.800 ************************************ 00:31:33.800 09:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:33.800 * Looking for test storage... 00:31:33.800 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:33.800 09:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:33.800 09:41:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:31:33.800 09:41:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:33.800 09:41:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:33.800 09:41:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:33.800 09:41:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:33.800 09:41:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:33.800 09:41:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:33.800 09:41:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:33.800 09:41:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:33.800 09:41:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:33.800 09:41:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:33.801 09:41:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:33.801 09:41:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:33.801 09:41:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:33.801 09:41:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:33.801 09:41:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:33.801 09:41:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:33.801 09:41:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:33.801 09:41:17 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:33.801 09:41:17 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:33.801 09:41:17 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:31:33.801 09:41:17 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.801 09:41:17 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.801 09:41:17 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.801 09:41:17 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:31:33.801 09:41:17 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.801 09:41:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:31:33.801 09:41:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:33.801 09:41:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:33.801 09:41:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:33.801 09:41:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:33.801 09:41:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:33.801 09:41:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:33.801 09:41:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:33.801 09:41:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:33.801 09:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:31:33.801 09:41:17 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:31:33.801 09:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:31:33.801 09:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:31:33.801 09:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:31:33.801 09:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:31:33.801 09:41:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:31:33.801 09:41:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:33.801 09:41:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:33.801 09:41:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:33.801 09:41:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:33.801 09:41:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:33.801 09:41:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:33.801 09:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:33.801 09:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:33.801 09:41:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:33.801 09:41:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:33.801 09:41:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:31:33.801 09:41:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:35.702 09:41:19 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:35.702 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:35.702 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:35.702 09:41:19 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:35.702 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:35.702 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:35.702 09:41:19 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:35.702 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:35.702 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:31:35.702 00:31:35.702 --- 10.0.0.2 ping statistics --- 00:31:35.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:35.702 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:31:35.702 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:35.702 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:35.702 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:31:35.702 00:31:35.702 --- 10.0.0.1 ping statistics --- 00:31:35.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:35.703 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:31:35.703 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:35.703 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:31:35.703 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:35.703 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:35.703 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:35.703 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:35.703 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:35.703 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:35.703 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:35.703 09:41:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:31:35.703 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:35.703 09:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:35.703 09:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:35.703 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=861748 00:31:35.703 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:35.703 09:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 861748 00:31:35.703 09:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 861748 ']' 00:31:35.703 09:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:35.703 09:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:35.703 09:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:35.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:35.703 09:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:35.703 09:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:35.703 [2024-07-14 09:41:19.922890] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:31:35.703 [2024-07-14 09:41:19.922989] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:35.703 EAL: No free 2048 kB hugepages reported on node 1 00:31:35.703 [2024-07-14 09:41:19.988126] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:35.703 [2024-07-14 09:41:20.078129] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:35.703 [2024-07-14 09:41:20.078209] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:35.703 [2024-07-14 09:41:20.078229] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:35.703 [2024-07-14 09:41:20.078241] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:35.703 [2024-07-14 09:41:20.078264] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
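Stripped of the xtrace wrapping, the nvmftestinit plumbing traced above moves the target-side port into its own network namespace and keeps the initiator port in the root namespace; a rough equivalent (with $rootdir again shorthand for the workspace spdk checkout) is:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target NIC moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # initiator -> target reachability
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator reachability

    # The target for the discovery test then runs inside that namespace:
    ip netns exec cvl_0_0_ns_spdk $rootdir/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2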
00:31:35.703 [2024-07-14 09:41:20.078290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:35.961 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:35.961 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:31:35.961 09:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:35.961 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:35.961 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:35.961 09:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:35.961 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:35.961 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.961 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:35.961 [2024-07-14 09:41:20.222170] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:35.961 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.962 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:31:35.962 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.962 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:35.962 [2024-07-14 09:41:20.230352] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:35.962 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.962 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:31:35.962 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.962 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:35.962 null0 00:31:35.962 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.962 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:31:35.962 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.962 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:35.962 null1 00:31:35.962 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.962 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:31:35.962 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.962 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:35.962 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.962 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=861768 00:31:35.962 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:31:35.962 09:41:20 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@46 -- # waitforlisten 861768 /tmp/host.sock 00:31:35.962 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 861768 ']' 00:31:35.962 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:31:35.962 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:35.962 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:35.962 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:35.962 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:35.962 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:35.962 [2024-07-14 09:41:20.301949] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:31:35.962 [2024-07-14 09:41:20.302030] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid861768 ] 00:31:35.962 EAL: No free 2048 kB hugepages reported on node 1 00:31:35.962 [2024-07-14 09:41:20.363184] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:36.220 [2024-07-14 09:41:20.455402] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:36.220 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:36.220 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:31:36.220 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:36.220 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:31:36.220 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.220 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:36.220 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.220 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:31:36.220 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.220 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:36.220 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.220 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:31:36.220 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:31:36.220 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:36.220 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.220 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:36.220 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:36.220 09:41:20 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@59 -- # sort 00:31:36.220 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:36.220 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.220 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:31:36.220 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:31:36.220 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:36.220 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:36.220 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.220 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:36.220 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:36.220 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:36.220 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.220 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:31:36.220 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:31:36.220 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.220 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:36.220 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.220 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:31:36.478 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:36.478 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.478 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:36.478 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:36.478 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:36.478 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:36.478 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.478 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:31:36.478 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:31:36.478 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:36.478 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:36.478 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.478 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:36.478 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:36.478 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:36.478 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.478 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:31:36.478 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 
00:31:36.478 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.478 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:36.478 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.478 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:31:36.478 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:36.478 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.478 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:36.478 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:36.478 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:36.478 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:36.478 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.478 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:31:36.478 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:31:36.478 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:36.478 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.478 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:36.478 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:36.478 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:36.478 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:36.478 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.478 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:31:36.478 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:36.478 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.478 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:36.478 [2024-07-14 09:41:20.844002] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:36.478 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.478 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:31:36.478 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:36.478 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.478 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:36.478 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:36.478 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:36.478 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:36.478 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.478 09:41:20 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@97 -- # [[ '' == '' ]] 00:31:36.478 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:31:36.478 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:36.478 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.479 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:36.479 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:36.479 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:36.479 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:36.479 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.735 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:31:36.735 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:31:36.735 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:36.735 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:36.735 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:36.735 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:36.735 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:36.735 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:36.735 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:31:36.735 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:36.735 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.735 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:36.735 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:36.735 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.735 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:36.735 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:31:36.735 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:31:36.735 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:36.735 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:31:36.735 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.735 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:36.735 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.735 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:36.735 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:36.735 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:36.735 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:36.735 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:36.735 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:31:36.735 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:36.735 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.735 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:36.735 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:36.735 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:36.735 09:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:36.735 09:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.735 09:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:31:36.735 09:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:31:37.299 [2024-07-14 09:41:21.600244] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:37.299 [2024-07-14 09:41:21.600273] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:37.299 [2024-07-14 09:41:21.600302] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:37.299 [2024-07-14 09:41:21.686586] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:37.557 [2024-07-14 09:41:21.791587] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:37.557 [2024-07-14 09:41:21.791614] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:37.821 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:37.821 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:37.821 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:31:37.821 09:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:37.821 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.821 09:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:37.821 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:37.821 09:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:37.821 09:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:37.821 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:37.821 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:37.821 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:37.821 09:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:37.821 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:37.821 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:37.821 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:37.821 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:31:37.821 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:31:37.821 09:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:37.821 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.821 09:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:37.821 09:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:37.821 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:37.821 09:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:37.821 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:37.821 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:31:37.821 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:37.821 09:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:37.821 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:37.821 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:37.821 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:37.821 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # 
eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:31:37.821 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:31:37.821 09:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:37.821 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.821 09:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:37.821 09:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:37.821 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:37.821 09:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:37.822 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:37.822 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:31:37.822 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:37.822 09:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:31:37.822 09:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:31:37.822 09:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:37.822 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:37.822 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:37.822 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:37.822 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:37.822 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:31:37.822 09:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:37.822 09:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:37.822 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.822 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:37.822 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:37.822 09:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:31:37.822 09:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:31:37.822 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:31:37.822 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:37.822 09:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:31:37.822 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.822 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:37.822 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:37.822 09:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:37.822 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:37.822 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:37.822 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:37.822 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:37.822 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:31:37.822 09:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:37.822 09:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:37.822 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.822 09:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:37.822 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:37.822 09:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:37.822 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:37.822 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:37.822 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:37.822 09:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:31:37.822 09:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:31:37.822 09:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:37.822 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:37.822 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:37.822 09:41:22 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:31:37.822 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:37.822 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:31:37.822 09:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:31:37.822 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.822 09:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:37.822 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:37.822 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:38.080 09:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:31:38.080 09:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:38.080 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:31:38.080 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:38.080 09:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:31:38.080 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:38.080 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:38.080 [2024-07-14 09:41:22.304435] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:38.080 [2024-07-14 09:41:22.304934] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:38.080 [2024-07-14 09:41:22.304977] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:38.080 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:38.080 09:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:38.080 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:38.080 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:38.080 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:38.080 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:38.080 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:31:38.080 09:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:38.080 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:38.080 09:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:38.080 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:38.080 09:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:38.080 09:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:38.080 09:41:22 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:38.080 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:38.080 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:38.080 09:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:38.080 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:38.080 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:38.080 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:38.080 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:38.080 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:31:38.080 09:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:38.080 09:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:38.080 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:38.080 09:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:38.080 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:38.081 09:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:38.081 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:38.081 [2024-07-14 09:41:22.391256] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:31:38.081 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:38.081 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:38.081 09:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:38.081 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:38.081 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:38.081 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:38.081 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:38.081 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:31:38.081 09:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:38.081 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:38.081 09:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:38.081 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:38.081 09:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:38.081 09:41:22 nvmf_tcp.nvmf_host_discovery 
-- host/discovery.sh@63 -- # xargs 00:31:38.081 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:38.081 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:31:38.081 09:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:31:38.081 [2024-07-14 09:41:22.453864] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:38.081 [2024-07-14 09:41:22.453912] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:38.081 [2024-07-14 09:41:22.453921] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:39.022 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:39.022 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:39.022 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:31:39.022 09:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:39.022 09:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:39.022 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.022 09:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:39.022 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:39.022 09:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:39.022 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.280 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:31:39.280 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:39.280 09:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:31:39.280 09:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:39.280 09:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:39.280 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:39.280 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:39.280 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:39.280 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:39.280 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:31:39.280 09:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:39.280 09:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:39.280 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.280 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:39.280 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.280 09:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:39.280 09:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:39.280 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:31:39.280 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:39.280 09:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:39.280 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.280 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:39.280 [2024-07-14 09:41:23.528412] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:39.280 [2024-07-14 09:41:23.528447] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:39.280 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.280 09:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:39.280 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:39.280 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:39.280 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:39.280 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:39.280 [2024-07-14 09:41:23.533077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:39.280 [2024-07-14 09:41:23.533106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.280 [2024-07-14 09:41:23.533128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:39.280 [2024-07-14 09:41:23.533154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.280 [2024-07-14 09:41:23.533183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:39.280 [2024-07-14 09:41:23.533206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.280 [2024-07-14 09:41:23.533230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:39.280 [2024-07-14 09:41:23.533254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.280 [2024-07-14 09:41:23.533271] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1594530 is same with the state(5) to be set 00:31:39.280 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:31:39.280 09:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:39.280 09:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:39.280 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.280 09:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:39.280 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:39.280 09:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:39.280 [2024-07-14 09:41:23.543073] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1594530 (9): Bad file descriptor 00:31:39.280 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.280 [2024-07-14 09:41:23.553111] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:39.280 [2024-07-14 09:41:23.553398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.280 [2024-07-14 09:41:23.553427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1594530 with addr=10.0.0.2, port=4420 00:31:39.280 [2024-07-14 09:41:23.553449] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1594530 is same with the state(5) to be set 00:31:39.280 [2024-07-14 09:41:23.553473] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1594530 (9): Bad file descriptor 00:31:39.281 [2024-07-14 09:41:23.553506] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:39.281 [2024-07-14 09:41:23.553524] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:39.281 [2024-07-14 09:41:23.553554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:39.281 [2024-07-14 09:41:23.553574] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
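The connect() failures above are expected rather than fatal: errno 111 is ECONNREFUSED, and host/discovery.sh@127 has just removed the 4420 listener, so the already-attached nvme0 path keeps failing its reconnect attempts until the discovery log page update drops 4420 and leaves only 4421. A rough sketch of that step and of the condition the test then polls for, reusing the rpc_cmd helper and jq filter seen in this trace:

  # Target side: drop the first listener; the 4421 listener added earlier stays up.
  rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  # Host side: poll until controller nvme0 reports only the second port
  # (the same check get_subsystem_paths performs via bdev_nvme_get_controllers).
  for ((i = 0; i < 10; i++)); do
      paths=$(rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 |
              jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs)
      [[ $paths == 4421 ]] && break
      sleep 1
  done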
00:31:39.281 [2024-07-14 09:41:23.563200] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:39.281 [2024-07-14 09:41:23.563475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.281 [2024-07-14 09:41:23.563505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1594530 with addr=10.0.0.2, port=4420 00:31:39.281 [2024-07-14 09:41:23.563523] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1594530 is same with the state(5) to be set 00:31:39.281 [2024-07-14 09:41:23.563548] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1594530 (9): Bad file descriptor 00:31:39.281 [2024-07-14 09:41:23.563584] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:39.281 [2024-07-14 09:41:23.563604] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:39.281 [2024-07-14 09:41:23.563619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:39.281 [2024-07-14 09:41:23.563640] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:39.281 [2024-07-14 09:41:23.573291] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:39.281 [2024-07-14 09:41:23.573555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.281 [2024-07-14 09:41:23.573587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1594530 with addr=10.0.0.2, port=4420 00:31:39.281 [2024-07-14 09:41:23.573606] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1594530 is same with the state(5) to be set 00:31:39.281 [2024-07-14 09:41:23.573631] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1594530 (9): Bad file descriptor 00:31:39.281 [2024-07-14 09:41:23.573680] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:39.281 [2024-07-14 09:41:23.573703] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:39.281 [2024-07-14 09:41:23.573718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:39.281 [2024-07-14 09:41:23.573739] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:39.281 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:39.281 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:39.281 09:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:39.281 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:39.281 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:39.281 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:39.281 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:39.281 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:31:39.281 09:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:39.281 09:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:39.281 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.281 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:39.281 09:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:39.281 09:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:39.281 [2024-07-14 09:41:23.584006] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:39.281 [2024-07-14 09:41:23.584269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.281 [2024-07-14 09:41:23.584297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1594530 with addr=10.0.0.2, port=4420 00:31:39.281 [2024-07-14 09:41:23.584315] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1594530 is same with the state(5) to be set 00:31:39.281 [2024-07-14 09:41:23.584337] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1594530 (9): Bad file descriptor 00:31:39.281 [2024-07-14 09:41:23.584358] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:39.281 [2024-07-14 09:41:23.584371] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:39.281 [2024-07-14 09:41:23.584401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:39.281 [2024-07-14 09:41:23.584420] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:39.281 [2024-07-14 09:41:23.594078] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:39.281 [2024-07-14 09:41:23.594327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.281 [2024-07-14 09:41:23.594354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1594530 with addr=10.0.0.2, port=4420 00:31:39.281 [2024-07-14 09:41:23.594370] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1594530 is same with the state(5) to be set 00:31:39.281 [2024-07-14 09:41:23.594392] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1594530 (9): Bad file descriptor 00:31:39.281 [2024-07-14 09:41:23.594411] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:39.281 [2024-07-14 09:41:23.594425] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:39.281 [2024-07-14 09:41:23.594437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:39.281 [2024-07-14 09:41:23.594455] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:39.281 [2024-07-14 09:41:23.604147] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:39.281 [2024-07-14 09:41:23.604388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.281 [2024-07-14 09:41:23.604415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1594530 with addr=10.0.0.2, port=4420 00:31:39.281 [2024-07-14 09:41:23.604431] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1594530 is same with the state(5) to be set 00:31:39.281 [2024-07-14 09:41:23.604453] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1594530 (9): Bad file descriptor 00:31:39.281 [2024-07-14 09:41:23.604474] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:39.281 [2024-07-14 09:41:23.604487] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:39.281 [2024-07-14 09:41:23.604499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:39.281 [2024-07-14 09:41:23.604523] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
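The notification counts asserted throughout this trace (is_notification_count_eq) come from the host app's notify bus: each bdev add or remove produces one notification, and the helper only counts entries newer than the last acknowledged notify_id. A condensed sketch consistent with the values seen above (count 1 after each namespace add, notify_id advancing from 0 to 2), assuming the same /tmp/host.sock RPC socket:

  notify_id=0
  get_notification_count() {
      # Fetch notifications newer than the current high-water mark and advance it.
      notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" |
                           jq '. | length')
      notify_id=$((notify_id + notification_count))
  }

  get_notification_count
  (( notification_count == expected_count ))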
00:31:39.281 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.281 [2024-07-14 09:41:23.614230] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:39.281 [2024-07-14 09:41:23.614526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:39.281 [2024-07-14 09:41:23.614553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1594530 with addr=10.0.0.2, port=4420 00:31:39.281 [2024-07-14 09:41:23.614569] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1594530 is same with the state(5) to be set 00:31:39.281 [2024-07-14 09:41:23.614591] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1594530 (9): Bad file descriptor 00:31:39.281 [2024-07-14 09:41:23.614612] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:39.281 [2024-07-14 09:41:23.614626] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:39.281 [2024-07-14 09:41:23.614640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:39.281 [2024-07-14 09:41:23.614660] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:39.281 [2024-07-14 09:41:23.615111] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:31:39.281 [2024-07-14 09:41:23.615139] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:39.281 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:39.281 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:39.281 09:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:39.281 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:39.281 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:39.281 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:39.281 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:31:39.281 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:31:39.281 09:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:39.281 09:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:39.281 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.281 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:39.281 09:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:39.281 09:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:39.281 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:31:39.281 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:31:39.281 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:39.281 09:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:31:39.281 09:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:39.281 09:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:39.281 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:39.281 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:39.281 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:39.281 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:39.281 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:31:39.281 09:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:39.281 09:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:39.281 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.281 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:39.281 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.281 09:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:39.282 09:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:39.282 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:31:39.282 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:39.282 09:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:31:39.282 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.282 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:39.282 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.282 09:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:31:39.282 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:31:39.282 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:39.282 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:39.282 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:31:39.282 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:31:39.282 09:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:39.282 09:41:23 nvmf_tcp.nvmf_host_discovery 
-- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:39.282 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.282 09:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:39.282 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:39.282 09:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:39.282 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.539 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:31:39.539 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:39.539 09:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:31:39.539 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:31:39.539 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:39.539 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:39.539 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:31:39.539 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:31:39.539 09:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:39.539 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.539 09:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:39.539 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:39.539 09:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:39.539 09:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:39.539 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.539 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:31:39.539 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:39.539 09:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:31:39.539 09:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:31:39.539 09:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:39.540 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:39.540 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:39.540 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:39.540 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:39.540 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:31:39.540 09:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:39.540 09:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # 
jq '. | length' 00:31:39.540 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.540 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:39.540 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.540 09:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:31:39.540 09:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:31:39.540 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:31:39.540 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:39.540 09:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:39.540 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.540 09:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:40.472 [2024-07-14 09:41:24.900132] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:40.472 [2024-07-14 09:41:24.900192] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:40.472 [2024-07-14 09:41:24.900214] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:40.730 [2024-07-14 09:41:24.986481] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:31:40.730 [2024-07-14 09:41:25.094701] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:40.730 [2024-07-14 09:41:25.094744] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:40.730 09:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.730 09:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:40.730 09:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:40.730 09:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:40.730 09:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:40.730 09:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:40.730 09:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:40.730 09:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:40.730 09:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:40.730 09:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.730 09:41:25 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:31:40.730 request: 00:31:40.730 { 00:31:40.730 "name": "nvme", 00:31:40.730 "trtype": "tcp", 00:31:40.730 "traddr": "10.0.0.2", 00:31:40.730 "adrfam": "ipv4", 00:31:40.730 "trsvcid": "8009", 00:31:40.730 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:40.730 "wait_for_attach": true, 00:31:40.730 "method": "bdev_nvme_start_discovery", 00:31:40.730 "req_id": 1 00:31:40.730 } 00:31:40.730 Got JSON-RPC error response 00:31:40.730 response: 00:31:40.730 { 00:31:40.730 "code": -17, 00:31:40.730 "message": "File exists" 00:31:40.730 } 00:31:40.730 09:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:40.730 09:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:40.730 09:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:40.730 09:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:40.730 09:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:40.730 09:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:31:40.730 09:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:40.730 09:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:40.730 09:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.730 09:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:40.730 09:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:40.730 09:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:40.730 09:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.730 09:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:31:40.731 09:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:31:40.731 09:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:40.731 09:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:40.731 09:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.731 09:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:40.731 09:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:40.731 09:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:40.731 09:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.988 09:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:40.988 09:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:40.988 09:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:40.988 09:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:40.988 09:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:31:40.988 09:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:40.988 09:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:40.988 09:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:40.988 09:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:40.988 09:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.988 09:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:40.988 request: 00:31:40.988 { 00:31:40.988 "name": "nvme_second", 00:31:40.988 "trtype": "tcp", 00:31:40.988 "traddr": "10.0.0.2", 00:31:40.988 "adrfam": "ipv4", 00:31:40.988 "trsvcid": "8009", 00:31:40.988 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:40.988 "wait_for_attach": true, 00:31:40.988 "method": "bdev_nvme_start_discovery", 00:31:40.988 "req_id": 1 00:31:40.988 } 00:31:40.989 Got JSON-RPC error response 00:31:40.989 response: 00:31:40.989 { 00:31:40.989 "code": -17, 00:31:40.989 "message": "File exists" 00:31:40.989 } 00:31:40.989 09:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:40.989 09:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:40.989 09:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:40.989 09:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:40.989 09:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:40.989 09:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:31:40.989 09:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:40.989 09:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:40.989 09:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.989 09:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:40.989 09:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:40.989 09:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:40.989 09:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.989 09:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:31:40.989 09:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:31:40.989 09:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:40.989 09:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:40.989 09:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.989 09:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:40.989 09:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:40.989 09:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:40.989 09:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.989 09:41:25 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:40.989 09:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:40.989 09:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:40.989 09:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:40.989 09:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:40.989 09:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:40.989 09:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:40.989 09:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:40.989 09:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:40.989 09:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.989 09:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:41.922 [2024-07-14 09:41:26.299097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:41.922 [2024-07-14 09:41:26.299151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1592020 with addr=10.0.0.2, port=8010 00:31:41.922 [2024-07-14 09:41:26.299179] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:41.922 [2024-07-14 09:41:26.299192] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:41.922 [2024-07-14 09:41:26.299205] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:42.855 [2024-07-14 09:41:27.301550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:42.855 [2024-07-14 09:41:27.301585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1592020 with addr=10.0.0.2, port=8010 00:31:42.855 [2024-07-14 09:41:27.301620] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:42.855 [2024-07-14 09:41:27.301633] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:42.855 [2024-07-14 09:41:27.301644] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:44.230 [2024-07-14 09:41:28.303684] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:31:44.230 request: 00:31:44.230 { 00:31:44.230 "name": "nvme_second", 00:31:44.230 "trtype": "tcp", 00:31:44.230 "traddr": "10.0.0.2", 00:31:44.230 "adrfam": "ipv4", 00:31:44.230 "trsvcid": "8010", 00:31:44.231 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:44.231 "wait_for_attach": false, 00:31:44.231 "attach_timeout_ms": 3000, 00:31:44.231 "method": "bdev_nvme_start_discovery", 00:31:44.231 "req_id": 1 00:31:44.231 } 00:31:44.231 Got JSON-RPC error response 00:31:44.231 response: 00:31:44.231 { 00:31:44.231 "code": -110, 
00:31:44.231 "message": "Connection timed out" 00:31:44.231 } 00:31:44.231 09:41:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:44.231 09:41:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:44.231 09:41:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:44.231 09:41:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:44.231 09:41:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:44.231 09:41:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:31:44.231 09:41:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:44.231 09:41:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:44.231 09:41:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.231 09:41:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:44.231 09:41:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:44.231 09:41:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:44.231 09:41:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.231 09:41:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:31:44.231 09:41:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:31:44.231 09:41:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 861768 00:31:44.231 09:41:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:31:44.231 09:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:44.231 09:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:31:44.231 09:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:44.231 09:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:31:44.231 09:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:44.231 09:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:44.231 rmmod nvme_tcp 00:31:44.231 rmmod nvme_fabrics 00:31:44.231 rmmod nvme_keyring 00:31:44.231 09:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:44.231 09:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:31:44.231 09:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:31:44.231 09:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 861748 ']' 00:31:44.231 09:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 861748 00:31:44.231 09:41:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 861748 ']' 00:31:44.231 09:41:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 861748 00:31:44.231 09:41:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:31:44.231 09:41:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:44.231 09:41:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 861748 00:31:44.231 09:41:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:44.231 
09:41:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:44.231 09:41:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 861748' 00:31:44.231 killing process with pid 861748 00:31:44.231 09:41:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 861748 00:31:44.231 09:41:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 861748 00:31:44.489 09:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:44.489 09:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:44.489 09:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:44.489 09:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:44.489 09:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:44.489 09:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:44.489 09:41:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:44.489 09:41:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:46.392 09:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:46.392 00:31:46.392 real 0m12.986s 00:31:46.392 user 0m18.802s 00:31:46.392 sys 0m2.744s 00:31:46.392 09:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:46.392 09:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:46.392 ************************************ 00:31:46.392 END TEST nvmf_host_discovery 00:31:46.392 ************************************ 00:31:46.392 09:41:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:31:46.392 09:41:30 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:31:46.392 09:41:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:46.392 09:41:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:46.392 09:41:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:46.392 ************************************ 00:31:46.392 START TEST nvmf_host_multipath_status 00:31:46.392 ************************************ 00:31:46.392 09:41:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:31:46.392 * Looking for test storage... 
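A condensed, illustrative rendering of the RPC sequence the nvmf_host_discovery trace above exercises — the rpc.py arguments, socket path, addresses and NQNs are copied from the log, while the surrounding shell wrapper (variables, fallback echoes) is only a sketch, not the test's actual NOT/get_bdev_list helpers:

  # Illustrative condensation of the discovery RPCs traced above; argument values
  # are taken verbatim from the log, the wrapper logic is a sketch.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/tmp/host.sock    # RPC socket of the host-side application in this test

  # Registering a second discovery service against a portal the host already
  # follows is rejected with JSON-RPC error -17 ("File exists"), as seen above:
  $rpc -s $sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 \
      -f ipv4 -q nqn.2021-12.io.spdk:test -w || echo "rejected as expected (-17 File exists)"

  # Pointing discovery at port 8010, where nothing listens, with a 3000 ms attach
  # timeout fails with -110 ("Connection timed out") after the connect retries above:
  $rpc -s $sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 \
      -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 || echo "timed out as expected (-110)"

  # After each failure the test re-checks that the original discovery controller
  # and its namespaces are untouched:
  $rpc -s $sock bdev_nvme_get_discovery_info | jq -r '.[].name'   # expect: nvme
  $rpc -s $sock bdev_get_bdevs | jq -r '.[].name'                 # expect: nvme0n1 nvme0n2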
00:31:46.392 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:46.392 09:41:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:46.392 09:41:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:31:46.392 09:41:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:46.392 09:41:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:46.650 09:41:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:46.650 09:41:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:46.650 09:41:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:46.651 09:41:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:46.651 09:41:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:46.651 09:41:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:46.651 09:41:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:46.651 09:41:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:46.651 09:41:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:46.651 09:41:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:46.651 09:41:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:46.651 09:41:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:46.651 09:41:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:46.651 09:41:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:46.651 09:41:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:46.651 09:41:30 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:46.651 09:41:30 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:46.651 09:41:30 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:46.651 09:41:30 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.651 09:41:30 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.651 09:41:30 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.651 09:41:30 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:31:46.651 09:41:30 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.651 09:41:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:31:46.651 09:41:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:46.651 09:41:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:46.651 09:41:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:46.651 09:41:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:46.651 09:41:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:46.651 09:41:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:46.651 09:41:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:46.651 09:41:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:46.651 09:41:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:31:46.651 09:41:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:31:46.651 09:41:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:46.651 09:41:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:31:46.651 09:41:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:46.651 09:41:30 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:31:46.651 09:41:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:31:46.651 09:41:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:46.651 09:41:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:46.651 09:41:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:46.651 09:41:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:46.651 09:41:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:46.651 09:41:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:46.651 09:41:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:46.651 09:41:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:46.651 09:41:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:46.651 09:41:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:46.651 09:41:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:31:46.651 09:41:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:48.557 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:48.557 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
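Before the target starts, nvmf_tcp_init (traced below) moves one of the two detected cvl ports into a private network namespace and gives the pair back-to-back addresses. Condensed into plain commands, keeping the interface names and addresses exactly as they appear on this rig, the bring-up is roughly:

  # Rough condensation of the nvmf_tcp_init steps traced below; interface names
  # (cvl_0_0 / cvl_0_1) and the 10.0.0.x addresses are as reported on this machine.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                        # target gets its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target-side port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                   # initiator -> target reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator reachability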
00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:48.557 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:48.557 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:48.557 09:41:32 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:48.557 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:48.558 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:48.558 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:48.558 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:48.558 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:48.558 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:48.558 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:48.558 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:48.558 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:48.558 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:48.558 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:48.558 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:48.558 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:31:48.558 00:31:48.558 --- 10.0.0.2 ping statistics --- 00:31:48.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:48.558 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:31:48.558 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:48.558 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:48.558 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:31:48.558 00:31:48.558 --- 10.0.0.1 ping statistics --- 00:31:48.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:48.558 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:31:48.558 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:48.558 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:31:48.558 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:48.558 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:48.558 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:48.558 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:48.558 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:48.558 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:48.558 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:48.558 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:31:48.558 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:48.558 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:48.558 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:48.558 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=864799 00:31:48.558 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:31:48.558 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 864799 00:31:48.558 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 864799 ']' 00:31:48.558 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:48.558 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:48.558 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:48.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:48.558 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:48.558 09:41:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:48.558 [2024-07-14 09:41:32.996879] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
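With connectivity verified, the trace below starts the target inside the namespace, provisions it over RPC, and attaches a multipath-capable bdevperf host against both listeners. Condensed, with every argument taken from the log (the helper variables and backgrounding are added here only for brevity, and the waits between steps are omitted), the sequence is approximately:

  # Condensed from the multipath_status trace below; all arguments appear in the log.
  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc=$spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  # Target side, run inside the cvl_0_0_ns_spdk namespace (pid 864799 in the log):
  ip netns exec cvl_0_0_ns_spdk $spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem $nqn -a -s SPDK00000000000001 -r -m 2
  $rpc nvmf_subsystem_add_ns $nqn Malloc0
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4421

  # Host side: bdevperf (pid 865077 in the log), one controller per listener, the
  # second attached with -x multipath so both paths back the same Nvme0n1 bdev:
  $spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n $nqn -l -1 -o 10
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 \
      -f ipv4 -n $nqn -x multipath -l -1 -o 10

  # Each check_status round then flips the listeners' ANA state and inspects the
  # per-path flags with jq, e.g. for the 4420 path:
  $rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
      | jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'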
00:31:48.558 [2024-07-14 09:41:32.996964] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:48.816 EAL: No free 2048 kB hugepages reported on node 1 00:31:48.816 [2024-07-14 09:41:33.065950] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:48.816 [2024-07-14 09:41:33.160789] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:48.816 [2024-07-14 09:41:33.160851] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:48.816 [2024-07-14 09:41:33.160874] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:48.816 [2024-07-14 09:41:33.160890] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:48.816 [2024-07-14 09:41:33.160920] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:48.816 [2024-07-14 09:41:33.160976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:48.816 [2024-07-14 09:41:33.160981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:49.073 09:41:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:49.073 09:41:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:31:49.073 09:41:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:49.073 09:41:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:49.073 09:41:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:49.073 09:41:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:49.073 09:41:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=864799 00:31:49.073 09:41:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:49.073 [2024-07-14 09:41:33.523409] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:49.331 09:41:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:49.589 Malloc0 00:31:49.589 09:41:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:31:49.847 09:41:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:50.105 09:41:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:50.105 [2024-07-14 09:41:34.547964] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:50.362 09:41:34 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:50.362 [2024-07-14 09:41:34.784598] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:50.362 09:41:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=865077 00:31:50.362 09:41:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:50.362 09:41:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 865077 /var/tmp/bdevperf.sock 00:31:50.362 09:41:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 865077 ']' 00:31:50.362 09:41:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:31:50.362 09:41:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:50.362 09:41:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:50.363 09:41:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:50.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:50.363 09:41:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:50.363 09:41:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:50.929 09:41:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:50.929 09:41:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:31:50.929 09:41:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:31:50.929 09:41:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:31:51.495 Nvme0n1 00:31:51.495 09:41:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:31:51.753 Nvme0n1 00:31:52.011 09:41:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:31:52.012 09:41:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:31:53.914 09:41:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:31:53.914 09:41:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:31:54.193 09:41:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:54.459 09:41:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:31:55.392 09:41:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:31:55.392 09:41:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:55.392 09:41:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:55.392 09:41:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:55.650 09:41:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:55.650 09:41:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:55.650 09:41:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:55.650 09:41:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:55.908 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:55.908 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:55.908 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:55.908 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:56.166 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:56.166 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:56.166 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:56.166 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:56.423 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:56.423 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:56.423 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:56.423 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:56.681 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:56.681 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:56.681 09:41:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:56.681 09:41:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:56.939 09:41:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:56.939 09:41:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:31:56.939 09:41:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:57.197 09:41:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:57.455 09:41:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:31:58.388 09:41:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:31:58.388 09:41:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:58.388 09:41:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:58.388 09:41:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:58.645 09:41:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:58.645 09:41:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:58.645 09:41:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:58.645 09:41:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:58.902 09:41:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:58.902 09:41:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:58.902 09:41:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:58.902 09:41:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:59.159 09:41:43 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:59.159 09:41:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:59.159 09:41:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:59.159 09:41:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:59.416 09:41:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:59.416 09:41:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:59.416 09:41:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:59.416 09:41:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:59.673 09:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:59.673 09:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:59.673 09:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:59.673 09:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:59.932 09:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:59.932 09:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:31:59.932 09:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:00.190 09:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:32:00.446 09:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:32:01.377 09:41:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:32:01.377 09:41:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:01.377 09:41:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:01.377 09:41:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:01.635 09:41:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:01.635 09:41:46 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:01.635 09:41:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:01.635 09:41:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:01.893 09:41:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:01.893 09:41:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:01.893 09:41:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:01.893 09:41:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:02.151 09:41:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:02.151 09:41:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:02.151 09:41:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:02.151 09:41:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:02.407 09:41:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:02.407 09:41:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:02.407 09:41:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:02.407 09:41:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:02.664 09:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:02.664 09:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:02.664 09:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:02.664 09:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:02.920 09:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:02.920 09:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:32:02.920 09:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:03.177 09:41:47 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:32:03.435 09:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:32:04.368 09:41:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:32:04.368 09:41:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:04.368 09:41:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:04.368 09:41:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:04.626 09:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:04.626 09:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:04.626 09:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:04.626 09:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:04.884 09:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:04.884 09:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:04.884 09:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:04.884 09:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:05.142 09:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:05.142 09:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:05.142 09:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:05.142 09:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:05.400 09:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:05.400 09:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:05.400 09:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:05.400 09:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:05.658 09:41:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:32:05.658 09:41:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:05.658 09:41:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:05.658 09:41:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:05.916 09:41:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:05.916 09:41:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:32:05.916 09:41:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:32:06.174 09:41:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:32:06.463 09:41:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:32:07.398 09:41:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:32:07.398 09:41:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:07.398 09:41:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:07.398 09:41:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:07.656 09:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:07.656 09:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:07.656 09:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:07.656 09:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:07.913 09:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:07.913 09:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:07.913 09:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:07.913 09:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:08.171 09:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:08.171 09:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
port_status 4421 connected true 00:32:08.171 09:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:08.171 09:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:08.429 09:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:08.429 09:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:32:08.429 09:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:08.429 09:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:08.687 09:41:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:08.687 09:41:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:08.687 09:41:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:08.687 09:41:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:08.954 09:41:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:08.954 09:41:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:32:08.954 09:41:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:32:09.210 09:41:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:09.467 09:41:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:32:10.395 09:41:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:32:10.396 09:41:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:10.396 09:41:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:10.396 09:41:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:10.652 09:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:10.652 09:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:10.652 09:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:10.652 09:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:10.910 09:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:10.910 09:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:10.910 09:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:10.910 09:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:11.168 09:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:11.168 09:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:11.168 09:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:11.168 09:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:11.425 09:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:11.425 09:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:32:11.425 09:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:11.425 09:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:11.682 09:41:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:11.682 09:41:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:11.682 09:41:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:11.682 09:41:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:11.938 09:41:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:11.938 09:41:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:32:12.196 09:41:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:32:12.196 09:41:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:32:12.455 09:41:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:12.713 09:41:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:32:13.647 09:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:32:13.647 09:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:13.647 09:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:13.647 09:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:13.905 09:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:13.905 09:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:13.905 09:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:13.905 09:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:14.163 09:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:14.163 09:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:14.163 09:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:14.163 09:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:14.421 09:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:14.421 09:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:14.421 09:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:14.421 09:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:14.679 09:41:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:14.679 09:41:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:14.679 09:41:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:14.679 09:41:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:14.937 09:41:59 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:14.937 09:41:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:14.937 09:41:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:14.937 09:41:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:15.195 09:41:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:15.195 09:41:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:32:15.195 09:41:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:15.453 09:41:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:15.711 09:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:32:17.086 09:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:32:17.086 09:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:17.086 09:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:17.086 09:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:17.086 09:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:17.086 09:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:17.086 09:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:17.086 09:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:17.344 09:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:17.344 09:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:17.344 09:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:17.344 09:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:17.602 09:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:17.602 09:42:01 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:17.602 09:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:17.602 09:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:17.860 09:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:17.860 09:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:17.860 09:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:17.860 09:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:18.118 09:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:18.118 09:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:18.118 09:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:18.118 09:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:18.376 09:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:18.376 09:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:32:18.376 09:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:18.635 09:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:32:18.891 09:42:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:32:19.821 09:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:32:19.821 09:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:19.821 09:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:19.821 09:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:20.078 09:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:20.078 09:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:20.078 09:42:04 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:20.078 09:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:20.335 09:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:20.335 09:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:20.335 09:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:20.335 09:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:20.592 09:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:20.592 09:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:20.592 09:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:20.592 09:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:20.849 09:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:20.850 09:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:20.850 09:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:20.850 09:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:21.106 09:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:21.106 09:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:21.106 09:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:21.106 09:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:21.364 09:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:21.364 09:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:32:21.364 09:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:21.622 09:42:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:32:21.881 09:42:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:32:22.866 09:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:32:22.866 09:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:22.866 09:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:22.866 09:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:23.130 09:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:23.130 09:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:23.130 09:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:23.130 09:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:23.389 09:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:23.389 09:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:23.389 09:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:23.389 09:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:23.646 09:42:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:23.646 09:42:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:23.646 09:42:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:23.646 09:42:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:23.904 09:42:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:23.904 09:42:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:23.904 09:42:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:23.904 09:42:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:24.469 09:42:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:24.469 09:42:08 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:24.469 09:42:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:24.469 09:42:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:24.469 09:42:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:24.469 09:42:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 865077 00:32:24.469 09:42:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 865077 ']' 00:32:24.469 09:42:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 865077 00:32:24.469 09:42:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:32:24.469 09:42:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:24.469 09:42:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 865077 00:32:24.469 09:42:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:32:24.469 09:42:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:32:24.469 09:42:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 865077' 00:32:24.469 killing process with pid 865077 00:32:24.469 09:42:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 865077 00:32:24.469 09:42:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 865077 00:32:24.730 Connection closed with partial response: 00:32:24.730 00:32:24.730 00:32:24.730 09:42:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 865077 00:32:24.730 09:42:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:24.730 [2024-07-14 09:41:34.846995] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:32:24.730 [2024-07-14 09:41:34.847081] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid865077 ] 00:32:24.730 EAL: No free 2048 kB hugepages reported on node 1 00:32:24.730 [2024-07-14 09:41:34.907840] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:24.730 [2024-07-14 09:41:34.991730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:24.730 Running I/O for 90 seconds... 
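For reference, the port_status and set_ANA_state helpers traced above reduce to roughly the sketch below: port_status reads one field (current/connected/accessible) of the io_path for a given port from bdev_nvme_get_io_paths over the bdevperf RPC socket and compares it with the expected value, while set_ANA_state changes the ANA state of the two target listeners. The rpc.py path, socket, NQN and addresses are the ones that appear in the log; this is a simplified reconstruction for illustration, not the exact multipath_status.sh code.

  #!/usr/bin/env bash
  # Simplified reconstruction of the helpers exercised in the trace above.
  rpcpy=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  nqn=nqn.2016-06.io.spdk:cnode1

  # port_status <trsvcid> <field> <expected>: query the io_paths seen by bdevperf
  # and check one field (current/connected/accessible) of the path on that port.
  port_status() {
      local state
      state=$("$rpcpy" -s "$sock" bdev_nvme_get_io_paths \
          | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
      [[ "$state" == "$3" ]]
  }

  # set_ANA_state <state for 4420> <state for 4421>: flip the ANA state of the
  # two listeners on the target subsystem.
  set_ANA_state() {
      "$rpcpy" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4420 -n "$1"
      "$rpcpy" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }

  # Example, mirroring the non_optimized/inaccessible step above: after the state
  # change and a short settle time, 4420 should still be the current path and
  # 4421 should no longer be accessible.
  set_ANA_state non_optimized inaccessible
  sleep 1
  port_status 4420 current true && port_status 4421 accessible false && echo OK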
00:32:24.730 [2024-07-14 09:41:50.577963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:77040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.730 [2024-07-14 09:41:50.578030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:24.730 [2024-07-14 09:41:50.578108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:77048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.730 [2024-07-14 09:41:50.578137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:24.730 [2024-07-14 09:41:50.578198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:77056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.730 [2024-07-14 09:41:50.578224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:24.730 [2024-07-14 09:41:50.578259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:77064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.730 [2024-07-14 09:41:50.578283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:24.730 [2024-07-14 09:41:50.578319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:77072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.730 [2024-07-14 09:41:50.578344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:24.730 [2024-07-14 09:41:50.578378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.730 [2024-07-14 09:41:50.578404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:24.730 [2024-07-14 09:41:50.578451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:77088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.731 [2024-07-14 09:41:50.578476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:24.731 [2024-07-14 09:41:50.578570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.731 [2024-07-14 09:41:50.578600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:24.731 [2024-07-14 09:41:50.578636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:77104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.731 [2024-07-14 09:41:50.578665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:32:24.731 [2024-07-14 09:41:50.578702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:77112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.731 [2024-07-14 09:41:50.578731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:60 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:32:24.731 [2024-07-14 09:41:50.578767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:77120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.731 [2024-07-14 09:41:50.578821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:32:24.731 [2024-07-14 09:41:50.578858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:77128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.731 [2024-07-14 09:41:50.578909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:24.731 [2024-07-14 09:41:50.578946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:77136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.731 [2024-07-14 09:41:50.578975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:24.731 [2024-07-14 09:41:50.579012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:77144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.731 [2024-07-14 09:41:50.579040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:24.731 [2024-07-14 09:41:50.579078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:77152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.731 [2024-07-14 09:41:50.579104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:24.731 [2024-07-14 09:41:50.579142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:77160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.731 [2024-07-14 09:41:50.579168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:32:24.731 [2024-07-14 09:41:50.579220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:77168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.731 [2024-07-14 09:41:50.579248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:32:24.731 [2024-07-14 09:41:50.579299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:77176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.731 [2024-07-14 09:41:50.579324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:32:24.731 [2024-07-14 09:41:50.579358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.731 [2024-07-14 09:41:50.579384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:32:24.731 [2024-07-14 09:41:50.579418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:77192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.731 [2024-07-14 09:41:50.579445] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:32:24.731 [2024-07-14 09:41:50.579478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:77200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.731 [2024-07-14 09:41:50.579504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:24.731 [2024-07-14 09:41:50.579538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:77208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.731 [2024-07-14 09:41:50.579563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:32:24.731 [2024-07-14 09:41:50.579597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:77216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.731 [2024-07-14 09:41:50.579623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:32:24.731 [2024-07-14 09:41:50.579664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:77224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.731 [2024-07-14 09:41:50.579689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:32:24.731 [2024-07-14 09:41:50.580221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:77232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.731 [2024-07-14 09:41:50.580253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:32:24.731 [2024-07-14 09:41:50.580297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:77240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.731 [2024-07-14 09:41:50.580324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:24.731 [2024-07-14 09:41:50.580363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:77248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.731 [2024-07-14 09:41:50.580390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:32:24.731 [2024-07-14 09:41:50.580429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:77256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.731 [2024-07-14 09:41:50.580456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:32:24.731 [2024-07-14 09:41:50.580496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:77264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.731 [2024-07-14 09:41:50.580522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:32:24.731 [2024-07-14 09:41:50.580562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:77272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:24.731 [2024-07-14 09:41:50.580590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:32:24.731 [2024-07-14 09:41:50.580628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:77280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.731 [2024-07-14 09:41:50.580656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:32:24.731 [2024-07-14 09:41:50.580695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:77288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.731 [2024-07-14 09:41:50.580724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:32:24.731 [2024-07-14 09:41:50.580762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:77296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.731 [2024-07-14 09:41:50.580789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:24.731 [2024-07-14 09:41:50.580828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.731 [2024-07-14 09:41:50.580861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:24.731 [2024-07-14 09:41:50.580913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:77312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.731 [2024-07-14 09:41:50.580940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:32:24.731 [2024-07-14 09:41:50.580988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:77320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.731 [2024-07-14 09:41:50.581016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:32:24.731 [2024-07-14 09:41:50.581055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:77328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.731 [2024-07-14 09:41:50.581083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:32:24.731 [2024-07-14 09:41:50.581122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:77336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.731 [2024-07-14 09:41:50.581156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:24.731 [2024-07-14 09:41:50.581195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:77344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.731 [2024-07-14 09:41:50.581225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:24.732 [2024-07-14 09:41:50.581266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 
lba:77352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.732 [2024-07-14 09:41:50.581294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:32:24.732 [2024-07-14 09:41:50.581333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:77360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.732 [2024-07-14 09:41:50.581361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:32:24.732 [2024-07-14 09:41:50.581399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:77368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.732 [2024-07-14 09:41:50.581427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:24.732 [2024-07-14 09:41:50.581466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:77376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.732 [2024-07-14 09:41:50.581495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:24.732 [2024-07-14 09:41:50.581533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:77384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.732 [2024-07-14 09:41:50.581560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:24.732 [2024-07-14 09:41:50.581600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:77392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.732 [2024-07-14 09:41:50.581628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:24.732 [2024-07-14 09:41:50.581668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:77400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.732 [2024-07-14 09:41:50.581710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:32:24.732 [2024-07-14 09:41:50.581747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:77408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.732 [2024-07-14 09:41:50.581787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:32:24.732 [2024-07-14 09:41:50.581823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:77416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.732 [2024-07-14 09:41:50.581877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:32:24.732 [2024-07-14 09:41:50.581919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:77424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.732 [2024-07-14 09:41:50.581946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:32:24.732 [2024-07-14 09:41:50.581984] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:77432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.732 [2024-07-14 09:41:50.582010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:32:24.732 [2024-07-14 09:41:50.582048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:77440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.732 [2024-07-14 09:41:50.582074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:24.732 [2024-07-14 09:41:50.582112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:77448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.732 [2024-07-14 09:41:50.582138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:24.732 [2024-07-14 09:41:50.582200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:77456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.732 [2024-07-14 09:41:50.582226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:32:24.732 [2024-07-14 09:41:50.582264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.732 [2024-07-14 09:41:50.582290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:24.732 [2024-07-14 09:41:50.582326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:77472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.732 [2024-07-14 09:41:50.582353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:32:24.732 [2024-07-14 09:41:50.582389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:77480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.732 [2024-07-14 09:41:50.582415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:24.732 [2024-07-14 09:41:50.582452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:77488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.732 [2024-07-14 09:41:50.582478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:32:24.732 [2024-07-14 09:41:50.582516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:77496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.732 [2024-07-14 09:41:50.582541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:24.732 [2024-07-14 09:41:50.582580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:77504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.732 [2024-07-14 09:41:50.582606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005b p:0 m:0 dnr:0 
00:32:24.732 [2024-07-14 09:41:50.582642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:77512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.732 [2024-07-14 09:41:50.582674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:24.732 [2024-07-14 09:41:50.582711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:77520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.732 [2024-07-14 09:41:50.582737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:32:24.732 [2024-07-14 09:41:50.582775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:77528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.732 [2024-07-14 09:41:50.582801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:32:24.732 [2024-07-14 09:41:50.582840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:77536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.732 [2024-07-14 09:41:50.582888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:32:24.732 [2024-07-14 09:41:50.582927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:77544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.732 [2024-07-14 09:41:50.582956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:24.732 [2024-07-14 09:41:50.582994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:77552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.732 [2024-07-14 09:41:50.583021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:24.732 [2024-07-14 09:41:50.583058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:77560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.732 [2024-07-14 09:41:50.583084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:24.732 [2024-07-14 09:41:50.583123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:77568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.732 [2024-07-14 09:41:50.583164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:24.732 [2024-07-14 09:41:50.583200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:77576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.732 [2024-07-14 09:41:50.583226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:24.732 [2024-07-14 09:41:50.583264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:77584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.732 [2024-07-14 09:41:50.583290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:54 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:24.732 [2024-07-14 09:41:50.583326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:77592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.732 [2024-07-14 09:41:50.583352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:24.732 [2024-07-14 09:41:50.583388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:77600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.732 [2024-07-14 09:41:50.583413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:24.732 [2024-07-14 09:41:50.583450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:77608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.733 [2024-07-14 09:41:50.583476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:24.733 [2024-07-14 09:41:50.583519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:77616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.733 [2024-07-14 09:41:50.583546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:24.733 [2024-07-14 09:41:50.583583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:77624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.733 [2024-07-14 09:41:50.583610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:32:24.733 [2024-07-14 09:41:50.583647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:77632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.733 [2024-07-14 09:41:50.583672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:24.733 [2024-07-14 09:41:50.583718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:77640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.733 [2024-07-14 09:41:50.583743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:24.733 [2024-07-14 09:41:50.583781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:77648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.733 [2024-07-14 09:41:50.583807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:32:24.733 [2024-07-14 09:41:50.583844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:77656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.733 [2024-07-14 09:41:50.583892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:24.733 [2024-07-14 09:41:50.583932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:77664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.733 [2024-07-14 09:41:50.583959] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:24.733 [2024-07-14 09:41:50.583998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:77672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.733 [2024-07-14 09:41:50.584024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:24.733 [2024-07-14 09:41:50.584063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:77680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.733 [2024-07-14 09:41:50.584090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:32:24.733 [2024-07-14 09:41:50.584128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:76984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.733 [2024-07-14 09:41:50.584156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:24.733 [2024-07-14 09:41:50.584207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:76992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.733 [2024-07-14 09:41:50.584234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:24.733 [2024-07-14 09:41:50.584270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:77000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.733 [2024-07-14 09:41:50.584296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:24.733 [2024-07-14 09:41:50.584339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:77008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.733 [2024-07-14 09:41:50.584366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:24.733 [2024-07-14 09:41:50.584402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:77016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.733 [2024-07-14 09:41:50.584429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:24.733 [2024-07-14 09:41:50.584466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:77024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.733 [2024-07-14 09:41:50.584493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:24.733 [2024-07-14 09:41:50.584835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:77032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.733 [2024-07-14 09:41:50.584883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:24.733 [2024-07-14 09:41:50.584936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:77688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:24.733 [2024-07-14 09:41:50.584966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:24.733 [2024-07-14 09:41:50.585011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:77696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.733 [2024-07-14 09:41:50.585039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:24.733 [2024-07-14 09:41:50.585083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:77704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.733 [2024-07-14 09:41:50.585110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:24.733 [2024-07-14 09:41:50.585157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:77712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.733 [2024-07-14 09:41:50.585199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:24.733 [2024-07-14 09:41:50.585240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.733 [2024-07-14 09:41:50.585265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:24.733 [2024-07-14 09:41:50.585306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:77728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.733 [2024-07-14 09:41:50.585332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:24.733 [2024-07-14 09:41:50.585374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:77736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.733 [2024-07-14 09:41:50.585400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:24.733 [2024-07-14 09:41:50.585443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:77744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.733 [2024-07-14 09:41:50.585469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.733 [2024-07-14 09:41:50.585509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:77752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.733 [2024-07-14 09:41:50.585542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.733 [2024-07-14 09:41:50.585584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:77760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.733 [2024-07-14 09:41:50.585610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:24.733 [2024-07-14 09:41:50.585652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 
lba:77768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.733 [2024-07-14 09:41:50.585678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:32:24.733 [2024-07-14 09:41:50.585720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:77776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.733 [2024-07-14 09:41:50.585746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:32:24.733 [2024-07-14 09:41:50.585787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:77784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.733 [2024-07-14 09:41:50.585813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:32:24.733 [2024-07-14 09:41:50.585889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:77792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.733 [2024-07-14 09:41:50.585917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:32:24.733 [2024-07-14 09:41:50.585962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:77800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.733 [2024-07-14 09:41:50.585989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:32:24.733 [2024-07-14 09:41:50.586030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.733 [2024-07-14 09:41:50.586056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:32:24.733 [2024-07-14 09:41:50.586100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:77816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.733 [2024-07-14 09:41:50.586127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:32:24.734 [2024-07-14 09:41:50.586185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:77824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.734 [2024-07-14 09:41:50.586211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:32:24.734 [2024-07-14 09:41:50.586253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.734 [2024-07-14 09:41:50.586279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:32:24.734 [2024-07-14 09:41:50.586320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:77840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.734 [2024-07-14 09:41:50.586345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:32:24.734 [2024-07-14 09:41:50.586387] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:77848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.734 [2024-07-14 09:41:50.586418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:32:24.734 [2024-07-14 09:41:50.586461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:77856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.734 [2024-07-14 09:41:50.586488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:32:24.734 [2024-07-14 09:41:50.586529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:77864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.734 [2024-07-14 09:41:50.586554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:32:24.734 [2024-07-14 09:41:50.586596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:77872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.734 [2024-07-14 09:41:50.586622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:32:24.734 [2024-07-14 09:41:50.586672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:77880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.734 [2024-07-14 09:41:50.586699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:32:24.734 [2024-07-14 09:41:50.586741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:77888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.734 [2024-07-14 09:41:50.586767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:32:24.734 [2024-07-14 09:41:50.586807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:77896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.734 [2024-07-14 09:41:50.586833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:32:24.734 [2024-07-14 09:41:50.586899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.734 [2024-07-14 09:41:50.586927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:32:24.734 [2024-07-14 09:41:50.586972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:77912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.734 [2024-07-14 09:41:50.586999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:32:24.734 [2024-07-14 09:41:50.587042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:77920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.734 [2024-07-14 09:41:50.587069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 
00:32:24.734 [2024-07-14 09:41:50.587111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:77928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.734 [2024-07-14 09:41:50.587137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:32:24.734 [2024-07-14 09:41:50.587203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:77936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.734 [2024-07-14 09:41:50.587229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:32:24.734 [2024-07-14 09:41:50.587270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:77944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.734 [2024-07-14 09:41:50.587295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:32:24.734 [2024-07-14 09:41:50.587343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:77952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.734 [2024-07-14 09:41:50.587369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:32:24.734 [2024-07-14 09:41:50.587412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:77960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.734 [2024-07-14 09:41:50.587438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:32:24.734 [2024-07-14 09:41:50.587479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:77968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.734 [2024-07-14 09:41:50.587506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:32:24.734 [2024-07-14 09:41:50.587546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:77976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.734 [2024-07-14 09:41:50.587571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:32:24.734 [2024-07-14 09:41:50.587615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:77984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.734 [2024-07-14 09:41:50.587640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:32:24.734 [2024-07-14 09:41:50.587683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:77992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.734 [2024-07-14 09:41:50.587710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:32:24.734 [2024-07-14 09:42:06.253217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:58776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.734 [2024-07-14 09:42:06.253319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:95 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:32:24.734 [2024-07-14 09:42:06.253397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:58792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.734 [2024-07-14 09:42:06.253427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:32:24.734 [2024-07-14 09:42:06.253465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:58808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.734 [2024-07-14 09:42:06.253491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:32:24.734 [2024-07-14 09:42:06.253542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:58824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.734 [2024-07-14 09:42:06.253568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:32:24.734 [2024-07-14 09:42:06.253602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:58840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.734 [2024-07-14 09:42:06.253627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:32:24.734 [2024-07-14 09:42:06.253662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:58856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.734 [2024-07-14 09:42:06.253688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:24.734 [2024-07-14 09:42:06.253737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:58872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.734 [2024-07-14 09:42:06.253764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:32:24.734 [2024-07-14 09:42:06.253798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:58888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.734 [2024-07-14 09:42:06.253822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:32:24.734 [2024-07-14 09:42:06.253879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:58904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.734 [2024-07-14 09:42:06.253920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:32:24.734 [2024-07-14 09:42:06.253954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:58920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.734 [2024-07-14 09:42:06.253980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:32:24.735 [2024-07-14 09:42:06.254018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:58936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.735 [2024-07-14 09:42:06.254044] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:24.735 [2024-07-14 09:42:06.254081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:58952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.735 [2024-07-14 09:42:06.254108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:32:24.735 [2024-07-14 09:42:06.254145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:58968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.735 [2024-07-14 09:42:06.254172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:32:24.735 [2024-07-14 09:42:06.254222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:58984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.735 [2024-07-14 09:42:06.254261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:32:24.735 [2024-07-14 09:42:06.254299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:59000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.735 [2024-07-14 09:42:06.254327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:32:24.735 [2024-07-14 09:42:06.254362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:58624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.735 [2024-07-14 09:42:06.254387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:32:24.735 [2024-07-14 09:42:06.254422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:59016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.735 [2024-07-14 09:42:06.254448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:32:24.735 [2024-07-14 09:42:06.254483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:59032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.735 [2024-07-14 09:42:06.254513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:24.735 [2024-07-14 09:42:06.254551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:59048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.735 [2024-07-14 09:42:06.254582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:24.735 [2024-07-14 09:42:06.254620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:59064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.735 [2024-07-14 09:42:06.254646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:32:24.735 [2024-07-14 09:42:06.254682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:59080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:24.735 [2024-07-14 09:42:06.254707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:32:24.735 [2024-07-14 09:42:06.254742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:59096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.735 [2024-07-14 09:42:06.254768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:32:24.735 [2024-07-14 09:42:06.254815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:59112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.735 [2024-07-14 09:42:06.254842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:24.735 [2024-07-14 09:42:06.254900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:59128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.735 [2024-07-14 09:42:06.254927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:24.735 [2024-07-14 09:42:06.254961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:59144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.735 [2024-07-14 09:42:06.254987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:32:24.735 [2024-07-14 09:42:06.255023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:59160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.735 [2024-07-14 09:42:06.255049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:32:24.735 [2024-07-14 09:42:06.255085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:59176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.735 [2024-07-14 09:42:06.255110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:24.735 [2024-07-14 09:42:06.255150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:59192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.735 [2024-07-14 09:42:06.255176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:24.735 [2024-07-14 09:42:06.256885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:59208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.735 [2024-07-14 09:42:06.256921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:24.735 [2024-07-14 09:42:06.256963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:59224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.735 [2024-07-14 09:42:06.256991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:24.735 [2024-07-14 09:42:06.257029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:59240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.735 [2024-07-14 09:42:06.257062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:32:24.735 [2024-07-14 09:42:06.257103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:59256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.735 [2024-07-14 09:42:06.257130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:32:24.735 [2024-07-14 09:42:06.257183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:59272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.735 [2024-07-14 09:42:06.257209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:32:24.735 [2024-07-14 09:42:06.257259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:59288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.735 [2024-07-14 09:42:06.257285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:32:24.735 [2024-07-14 09:42:06.257319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:59304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.735 [2024-07-14 09:42:06.257345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:32:24.735 [2024-07-14 09:42:06.257378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:59320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.735 [2024-07-14 09:42:06.257404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:24.735 [2024-07-14 09:42:06.257440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:59336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.735 [2024-07-14 09:42:06.257465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:24.735 [2024-07-14 09:42:06.257498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:59352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.735 [2024-07-14 09:42:06.257523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:32:24.735 [2024-07-14 09:42:06.257558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:59368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.735 [2024-07-14 09:42:06.257583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:24.736 [2024-07-14 09:42:06.257633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:59384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.736 [2024-07-14 09:42:06.257659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:32:24.736 [2024-07-14 09:42:06.257694] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:59400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.736 [2024-07-14 09:42:06.257720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:24.736 [2024-07-14 09:42:06.257771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:59416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.736 [2024-07-14 09:42:06.257798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:32:24.736 [2024-07-14 09:42:06.257834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:59432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.736 [2024-07-14 09:42:06.257861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:24.736 [2024-07-14 09:42:06.257929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:59440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.736 [2024-07-14 09:42:06.257956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:24.736 [2024-07-14 09:42:06.257994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:59456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.736 [2024-07-14 09:42:06.258021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:24.736 [2024-07-14 09:42:06.258058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:59472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.736 [2024-07-14 09:42:06.258084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:32:24.736 [2024-07-14 09:42:06.258121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:59488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.736 [2024-07-14 09:42:06.258148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:32:24.736 [2024-07-14 09:42:06.258184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:59504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.736 [2024-07-14 09:42:06.258212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:32:24.736 [2024-07-14 09:42:06.258247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:59520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.736 [2024-07-14 09:42:06.258287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:24.736 [2024-07-14 09:42:06.258322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:59536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.736 [2024-07-14 09:42:06.258349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:32:24.736 [2024-07-14 09:42:06.258383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:59552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.736 [2024-07-14 09:42:06.258409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:24.736 [2024-07-14 09:42:06.258458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:59568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.736 [2024-07-14 09:42:06.258483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:24.736 [2024-07-14 09:42:06.258518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:59584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.736 [2024-07-14 09:42:06.258543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:24.736 [2024-07-14 09:42:06.258579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:59600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.736 [2024-07-14 09:42:06.258605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:24.736 [2024-07-14 09:42:06.258654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:59616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.736 [2024-07-14 09:42:06.258679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:24.736 [2024-07-14 09:42:06.258720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:58664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.736 [2024-07-14 09:42:06.258746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:24.736 [2024-07-14 09:42:06.258797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:58696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.736 [2024-07-14 09:42:06.258824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:24.736 [2024-07-14 09:42:06.258861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:58736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.736 [2024-07-14 09:42:06.258899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:24.736 [2024-07-14 09:42:06.258938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:59624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.736 [2024-07-14 09:42:06.258965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:32:24.736 [2024-07-14 09:42:06.259003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:58656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.736 [2024-07-14 09:42:06.259030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:0 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:24.736 [2024-07-14 09:42:06.259067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:58688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.736 [2024-07-14 09:42:06.259094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:24.736 [2024-07-14 09:42:06.259131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:58712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.736 [2024-07-14 09:42:06.259159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:32:24.736 [2024-07-14 09:42:06.259209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:58744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.736 [2024-07-14 09:42:06.259236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:24.736 [2024-07-14 09:42:06.259270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:59640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.736 [2024-07-14 09:42:06.259297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:24.736 Received shutdown signal, test time was about 32.525737 seconds 00:32:24.736 00:32:24.736 Latency(us) 00:32:24.736 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:24.736 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:32:24.736 Verification LBA range: start 0x0 length 0x4000 00:32:24.736 Nvme0n1 : 32.52 7982.50 31.18 0.00 0.00 16006.40 238.17 4026531.84 00:32:24.736 =================================================================================================================== 00:32:24.736 Total : 7982.50 31.18 0.00 0.00 16006.40 238.17 4026531.84 00:32:24.736 09:42:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:24.994 09:42:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:32:24.995 09:42:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:24.995 09:42:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:32:24.995 09:42:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:24.995 09:42:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:32:24.995 09:42:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:24.995 09:42:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:32:24.995 09:42:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:24.995 09:42:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:24.995 rmmod nvme_tcp 00:32:24.995 rmmod nvme_fabrics 00:32:24.995 rmmod nvme_keyring 00:32:25.252 09:42:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:25.252 
09:42:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:32:25.252 09:42:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:32:25.252 09:42:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 864799 ']' 00:32:25.252 09:42:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 864799 00:32:25.252 09:42:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 864799 ']' 00:32:25.252 09:42:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 864799 00:32:25.252 09:42:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:32:25.252 09:42:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:25.252 09:42:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 864799 00:32:25.252 09:42:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:25.252 09:42:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:25.252 09:42:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 864799' 00:32:25.252 killing process with pid 864799 00:32:25.252 09:42:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 864799 00:32:25.252 09:42:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 864799 00:32:25.511 09:42:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:25.511 09:42:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:25.511 09:42:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:25.511 09:42:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:25.511 09:42:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:25.511 09:42:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:25.511 09:42:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:25.511 09:42:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:27.416 09:42:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:27.416 00:32:27.416 real 0m40.965s 00:32:27.416 user 2m2.483s 00:32:27.416 sys 0m11.136s 00:32:27.416 09:42:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:27.416 09:42:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:27.416 ************************************ 00:32:27.416 END TEST nvmf_host_multipath_status 00:32:27.416 ************************************ 00:32:27.416 09:42:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:32:27.416 09:42:11 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:32:27.416 09:42:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:27.416 09:42:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:27.416 09:42:11 nvmf_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:32:27.416 ************************************ 00:32:27.416 START TEST nvmf_discovery_remove_ifc 00:32:27.416 ************************************ 00:32:27.416 09:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:32:27.416 * Looking for test storage... 00:32:27.416 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:27.416 09:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:27.416 09:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:32:27.416 09:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:27.416 09:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:27.416 09:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:27.416 09:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:27.416 09:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:27.416 09:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:27.416 09:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:27.416 09:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:27.416 09:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:27.416 09:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:27.416 09:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:27.416 09:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:27.416 09:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:27.416 09:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:27.416 09:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:27.416 09:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:27.416 09:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:27.416 09:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:27.416 09:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:27.416 09:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:27.416 09:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.416 09:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.416 09:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.416 09:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:32:27.416 09:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.416 09:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:32:27.416 09:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:27.416 09:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:27.416 09:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:27.416 09:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:27.416 09:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:27.416 09:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:27.416 09:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:27.416 09:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:27.702 09:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:32:27.702 09:42:11 nvmf_tcp.nvmf_discovery_remove_ifc 
-- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:32:27.702 09:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:32:27.702 09:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:32:27.702 09:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:32:27.702 09:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:32:27.702 09:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:32:27.702 09:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:27.702 09:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:27.702 09:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:27.702 09:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:27.702 09:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:27.702 09:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:27.702 09:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:27.702 09:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:27.702 09:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:27.702 09:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:27.702 09:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:32:27.702 09:42:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:29.602 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:29.602 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:32:29.602 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:29.602 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:29.602 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:29.602 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:29.602 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:29.602 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:32:29.602 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:29.602 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:32:29.602 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:32:29.602 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:32:29.602 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:32:29.602 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:32:29.602 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:32:29.602 09:42:13 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:29.602 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:29.602 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:29.602 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:29.602 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:29.602 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:29.602 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:29.602 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:29.602 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:29.602 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:29.602 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:29.602 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:29.602 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:29.602 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:29.602 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:29.602 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:29.602 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:29.602 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:29.602 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:29.602 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:29.602 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:29.602 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:29.602 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:29.602 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:29.602 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:29.602 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:29.603 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:29.603 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:29.603 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:29.603 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:29.603 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:29.603 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 
]] 00:32:29.603 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:29.603 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:29.603 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:29.603 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:29.603 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:29.603 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:29.603 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:29.603 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:29.603 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:29.603 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:29.603 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:29.603 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:29.603 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:29.603 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:29.603 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:29.603 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:29.603 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:29.603 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:29.603 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:29.603 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:29.603 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:29.603 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:29.603 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:29.603 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:29.603 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:29.603 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:32:29.603 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:29.603 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:29.603 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:29.603 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:29.603 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:29.603 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:29.603 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:29.603 
09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:29.603 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:29.603 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:29.603 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:29.603 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:29.603 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:29.603 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:29.603 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:29.603 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:29.603 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:29.603 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:29.603 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:29.603 09:42:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:29.603 09:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:29.603 09:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:29.603 09:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:29.603 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:29.603 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:32:29.603 00:32:29.603 --- 10.0.0.2 ping statistics --- 00:32:29.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:29.603 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:32:29.603 09:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:29.603 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:29.603 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:32:29.603 00:32:29.603 --- 10.0.0.1 ping statistics --- 00:32:29.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:29.603 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:32:29.603 09:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:29.603 09:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:32:29.603 09:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:29.603 09:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:29.603 09:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:29.603 09:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:29.603 09:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:29.603 09:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:29.603 09:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:29.603 09:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:32:29.603 09:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:29.603 09:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:29.603 09:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:29.603 09:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=871887 00:32:29.603 09:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 871887 00:32:29.603 09:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:29.603 09:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 871887 ']' 00:32:29.603 09:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:29.603 09:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:29.603 09:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:29.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:29.603 09:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:29.603 09:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:29.861 [2024-07-14 09:42:14.100284] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
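At this point the harness has turned the two ice ports into a loopback pair: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2/24 (target side), cvl_0_1 stays in the default namespace as 10.0.0.1/24 (initiator side), TCP port 4420 is opened in iptables, both directions are pinged, nvme-tcp is loaded, and nvmf_tgt is launched inside the namespace. A minimal sketch of that topology setup, assuming the same interface names as in the trace, is:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator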
00:32:29.861 [2024-07-14 09:42:14.100366] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:29.861 EAL: No free 2048 kB hugepages reported on node 1 00:32:29.861 [2024-07-14 09:42:14.167693] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:29.861 [2024-07-14 09:42:14.256330] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:29.861 [2024-07-14 09:42:14.256395] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:29.861 [2024-07-14 09:42:14.256411] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:29.861 [2024-07-14 09:42:14.256425] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:29.861 [2024-07-14 09:42:14.256437] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:29.861 [2024-07-14 09:42:14.256466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:30.119 09:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:30.119 09:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:32:30.119 09:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:30.119 09:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:30.119 09:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:30.119 09:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:30.119 09:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:32:30.119 09:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.119 09:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:30.119 [2024-07-14 09:42:14.412983] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:30.119 [2024-07-14 09:42:14.421159] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:30.119 null0 00:32:30.119 [2024-07-14 09:42:14.453069] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:30.119 09:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.119 09:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=871915 00:32:30.119 09:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:32:30.119 09:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 871915 /tmp/host.sock 00:32:30.119 09:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 871915 ']' 00:32:30.119 09:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:32:30.119 09:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:32:30.119 09:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:30.119 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:30.119 09:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:30.119 09:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:30.119 [2024-07-14 09:42:14.518227] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:32:30.119 [2024-07-14 09:42:14.518305] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid871915 ] 00:32:30.119 EAL: No free 2048 kB hugepages reported on node 1 00:32:30.377 [2024-07-14 09:42:14.578885] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:30.377 [2024-07-14 09:42:14.662552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:30.377 09:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:30.377 09:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:32:30.377 09:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:30.377 09:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:32:30.377 09:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.377 09:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:30.377 09:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.377 09:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:32:30.377 09:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.377 09:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:30.635 09:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.635 09:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:32:30.635 09:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.635 09:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:31.566 [2024-07-14 09:42:15.931966] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:31.566 [2024-07-14 09:42:15.931991] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:31.566 [2024-07-14 09:42:15.932013] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:31.823 [2024-07-14 09:42:16.059468] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:32.080 [2024-07-14 09:42:16.283825] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:32.080 [2024-07-14 09:42:16.283919] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:32.080 [2024-07-14 09:42:16.283973] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:32.080 [2024-07-14 09:42:16.283997] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:32.081 [2024-07-14 09:42:16.284021] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:32.081 09:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.081 09:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:32:32.081 09:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:32.081 09:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:32.081 09:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:32.081 09:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.081 09:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:32.081 09:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:32.081 09:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:32.081 [2024-07-14 09:42:16.290169] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x14f7300 was disconnected and freed. delete nvme_qpair. 
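The host-side nvmf_tgt (started with -r /tmp/host.sock) attaches to the discovery service on 10.0.0.2:8009 via bdev_nvme_start_discovery, and the discovered subsystem surfaces as bdev nvme0n1. The get_bdev_list/wait_for_bdev helpers visible in the trace simply poll bdev_get_bdevs once per second until the list matches; a rough sketch, assuming rpc_cmd maps to scripts/rpc.py against the host socket, is:

  # poll the host app until its bdev list matches the expected value
  get_bdev_list() {
      scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  wait_for_bdev() {
      while [[ "$(get_bdev_list)" != "$*" ]]; do
          sleep 1
      done
  }

  wait_for_bdev nvme0n1    # returns once discovery has attached nvme0 and exposed nvme0n1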
00:32:32.081 09:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.081 09:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:32:32.081 09:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:32:32.081 09:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:32:32.081 09:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:32:32.081 09:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:32.081 09:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:32.081 09:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:32.081 09:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.081 09:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:32.081 09:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:32.081 09:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:32.081 09:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.081 09:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:32.081 09:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:33.011 09:42:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:33.011 09:42:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:33.011 09:42:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:33.011 09:42:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.011 09:42:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:33.011 09:42:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:33.011 09:42:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:33.011 09:42:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.268 09:42:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:33.268 09:42:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:34.201 09:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:34.201 09:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:34.201 09:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.201 09:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:34.201 09:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:34.201 09:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # 
sort 00:32:34.201 09:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:34.201 09:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.201 09:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:34.201 09:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:35.134 09:42:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:35.134 09:42:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:35.134 09:42:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:35.134 09:42:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.134 09:42:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:35.134 09:42:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:35.134 09:42:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:35.134 09:42:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.134 09:42:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:35.134 09:42:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:36.508 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:36.508 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:36.508 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:36.508 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.508 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:36.508 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:36.508 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:36.508 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.508 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:36.508 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:37.470 09:42:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:37.470 09:42:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:37.470 09:42:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:37.470 09:42:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.470 09:42:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:37.470 09:42:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:37.470 09:42:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:37.470 09:42:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:32:37.470 09:42:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:37.470 09:42:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:37.470 [2024-07-14 09:42:21.724913] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:32:37.470 [2024-07-14 09:42:21.724973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:37.470 [2024-07-14 09:42:21.724993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.470 [2024-07-14 09:42:21.725018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:37.470 [2024-07-14 09:42:21.725032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.470 [2024-07-14 09:42:21.725046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:37.470 [2024-07-14 09:42:21.725059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.470 [2024-07-14 09:42:21.725072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:37.470 [2024-07-14 09:42:21.725085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.470 [2024-07-14 09:42:21.725100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:37.470 [2024-07-14 09:42:21.725113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.470 [2024-07-14 09:42:21.725129] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bdb40 is same with the state(5) to be set 00:32:37.470 [2024-07-14 09:42:21.734920] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14bdb40 (9): Bad file descriptor 00:32:37.470 [2024-07-14 09:42:21.744964] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:38.404 09:42:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:38.404 09:42:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:38.404 09:42:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:38.404 09:42:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.404 09:42:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:38.404 09:42:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:38.404 09:42:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:38.404 [2024-07-14 09:42:22.749939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:32:38.404 [2024-07-14 
09:42:22.750007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14bdb40 with addr=10.0.0.2, port=4420 00:32:38.404 [2024-07-14 09:42:22.750034] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bdb40 is same with the state(5) to be set 00:32:38.404 [2024-07-14 09:42:22.750082] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14bdb40 (9): Bad file descriptor 00:32:38.404 [2024-07-14 09:42:22.750558] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:32:38.404 [2024-07-14 09:42:22.750594] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:38.404 [2024-07-14 09:42:22.750613] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:38.404 [2024-07-14 09:42:22.750630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:38.404 [2024-07-14 09:42:22.750664] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:38.404 [2024-07-14 09:42:22.750684] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:38.404 09:42:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.404 09:42:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:38.404 09:42:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:39.345 [2024-07-14 09:42:23.753207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:39.345 [2024-07-14 09:42:23.753274] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:39.345 [2024-07-14 09:42:23.753288] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:39.345 [2024-07-14 09:42:23.753302] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:32:39.345 [2024-07-14 09:42:23.753345] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:39.345 [2024-07-14 09:42:23.753385] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:32:39.345 [2024-07-14 09:42:23.753448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:39.345 [2024-07-14 09:42:23.753470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.345 [2024-07-14 09:42:23.753489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:39.345 [2024-07-14 09:42:23.753502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.345 [2024-07-14 09:42:23.753516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:39.345 [2024-07-14 09:42:23.753529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.345 [2024-07-14 09:42:23.753543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:39.345 [2024-07-14 09:42:23.753556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.345 [2024-07-14 09:42:23.753570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:39.345 [2024-07-14 09:42:23.753583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:39.345 [2024-07-14 09:42:23.753596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:32:39.345 [2024-07-14 09:42:23.753810] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14bcf80 (9): Bad file descriptor 00:32:39.345 [2024-07-14 09:42:23.754825] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:32:39.345 [2024-07-14 09:42:23.754861] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:32:39.345 09:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:39.345 09:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:39.345 09:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:39.345 09:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.345 09:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:39.345 09:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:39.345 09:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:39.345 09:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.602 09:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:32:39.602 09:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:39.603 09:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:39.603 09:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:32:39.603 09:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:39.603 09:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:39.603 09:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.603 09:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:39.603 09:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:39.603 09:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:39.603 09:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:39.603 09:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.603 09:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:39.603 09:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:40.534 09:42:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:40.534 09:42:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:40.534 09:42:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:40.534 09:42:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.534 09:42:24 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:32:40.534 09:42:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:40.534 09:42:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:40.534 09:42:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.534 09:42:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:40.534 09:42:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:41.468 [2024-07-14 09:42:25.808843] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:41.468 [2024-07-14 09:42:25.808882] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:41.468 [2024-07-14 09:42:25.808904] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:41.724 [2024-07-14 09:42:25.936352] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:32:41.724 09:42:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:41.724 09:42:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:41.724 09:42:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:41.724 09:42:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.724 09:42:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:41.725 09:42:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:41.725 09:42:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:41.725 09:42:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.725 09:42:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:41.725 09:42:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:41.725 [2024-07-14 09:42:26.160733] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:41.725 [2024-07-14 09:42:26.160791] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:41.725 [2024-07-14 09:42:26.160826] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:41.725 [2024-07-14 09:42:26.160854] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:32:41.725 [2024-07-14 09:42:26.160887] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:41.725 [2024-07-14 09:42:26.166450] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x14d43f0 was disconnected and freed. delete nvme_qpair. 
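The failure injected above is nothing more than removing the target-side address and downing cvl_0_0 inside the namespace: the host's reconnect attempts fail (errno 110, then "Bad file descriptor"), the discovery entry is removed, and the bdev list drains to empty. Restoring the address and bringing the link back up lets the still-running discovery service re-attach and create nvme1n1. Condensed, and reusing the helpers sketched earlier, the pass amounts to:

  # take the target interface away and expect the bdev list to drain
  ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
  wait_for_bdev ''

  # give it back and expect discovery to re-attach with a fresh bdev
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  wait_for_bdev nvme1n1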
00:32:42.655 09:42:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:42.655 09:42:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:42.655 09:42:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:42.655 09:42:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.655 09:42:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:42.655 09:42:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:42.655 09:42:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:42.655 09:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.655 09:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:32:42.655 09:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:32:42.655 09:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 871915 00:32:42.655 09:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 871915 ']' 00:32:42.655 09:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 871915 00:32:42.655 09:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:32:42.655 09:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:42.655 09:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 871915 00:32:42.655 09:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:42.655 09:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:42.655 09:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 871915' 00:32:42.655 killing process with pid 871915 00:32:42.655 09:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 871915 00:32:42.655 09:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 871915 00:32:42.937 09:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:32:42.937 09:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:42.937 09:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:32:42.937 09:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:42.937 09:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:32:42.937 09:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:42.937 09:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:42.937 rmmod nvme_tcp 00:32:42.937 rmmod nvme_fabrics 00:32:42.937 rmmod nvme_keyring 00:32:42.937 09:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:42.937 09:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:32:42.937 09:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:32:42.937 
09:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 871887 ']' 00:32:42.937 09:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 871887 00:32:42.937 09:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 871887 ']' 00:32:42.937 09:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 871887 00:32:42.937 09:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:32:42.937 09:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:42.937 09:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 871887 00:32:43.195 09:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:43.195 09:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:43.195 09:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 871887' 00:32:43.195 killing process with pid 871887 00:32:43.195 09:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 871887 00:32:43.195 09:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 871887 00:32:43.195 09:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:43.195 09:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:43.195 09:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:43.195 09:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:43.195 09:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:43.195 09:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:43.195 09:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:43.195 09:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:45.725 09:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:45.725 00:32:45.725 real 0m17.867s 00:32:45.725 user 0m26.022s 00:32:45.725 sys 0m3.042s 00:32:45.725 09:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:45.725 09:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:45.725 ************************************ 00:32:45.725 END TEST nvmf_discovery_remove_ifc 00:32:45.725 ************************************ 00:32:45.725 09:42:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:32:45.725 09:42:29 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:45.725 09:42:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:45.725 09:42:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:45.725 09:42:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:45.725 ************************************ 00:32:45.725 START TEST nvmf_identify_kernel_target 00:32:45.725 ************************************ 00:32:45.725 09:42:29 
nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:45.725 * Looking for test storage... 00:32:45.725 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:45.725 09:42:29 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:45.725 09:42:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:32:45.725 09:42:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:45.725 09:42:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:45.725 09:42:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:45.725 09:42:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:45.725 09:42:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:45.725 09:42:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:45.725 09:42:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:45.725 09:42:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:45.725 09:42:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:45.725 09:42:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:45.725 09:42:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:45.725 09:42:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:45.725 09:42:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:45.725 09:42:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:45.725 09:42:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:45.725 09:42:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:45.725 09:42:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:45.725 09:42:29 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:45.726 09:42:29 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:45.726 09:42:29 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:45.726 09:42:29 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.726 09:42:29 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.726 09:42:29 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.726 09:42:29 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:32:45.726 09:42:29 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.726 09:42:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:32:45.726 09:42:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:45.726 09:42:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:45.726 09:42:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:45.726 09:42:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:45.726 09:42:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:45.726 09:42:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:45.726 09:42:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:45.726 09:42:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:45.726 09:42:29 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:32:45.726 09:42:29 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:45.726 09:42:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:45.726 09:42:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:45.726 09:42:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:45.726 09:42:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:45.726 09:42:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:45.726 09:42:29 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:45.726 09:42:29 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:45.726 09:42:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:45.726 09:42:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:45.726 09:42:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:32:45.726 09:42:29 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:47.624 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:47.624 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:32:47.624 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:47.624 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:47.624 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:47.624 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:47.624 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:47.624 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:32:47.624 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:47.624 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:32:47.624 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:32:47.624 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:32:47.624 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:32:47.624 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:32:47.624 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:32:47.624 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:47.624 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:47.624 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:47.624 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:47.624 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:47.624 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:47.624 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:47.624 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:47.624 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:47.624 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:47.624 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:47.624 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:47.624 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:47.624 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:47.624 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:47.624 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:47.624 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:47.624 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:47.624 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:47.624 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:47.624 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:47.624 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:47.624 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:47.624 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:47.624 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:47.624 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:47.624 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:47.624 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:47.624 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:47.624 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:47.624 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:47.624 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:47.624 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:47.624 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:47.624 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:47.624 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:47.624 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:47.624 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
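The device scan traced above classifies NICs purely by PCI vendor/device ID: 0x8086:0x159b and 0x8086:0x1592 land in the e810 list (Intel E810, `ice` driver), 0x8086:0x37d2 would be X722, and the 0x15b3:* IDs cover Mellanox ConnectX parts. common.sh resolves these through a pre-built pci_bus_cache; the loop below is a minimal standalone equivalent that walks sysfs directly, shown only to illustrate the matching, not the script's actual implementation:

    # Hypothetical helper: list PCI functions whose vendor:device pair marks them as E810 NICs
    for dev in /sys/bus/pci/devices/*; do
        vendor=$(cat "$dev/vendor")     # e.g. 0x8086
        device=$(cat "$dev/device")     # e.g. 0x159b
        case "$vendor:$device" in
            0x8086:0x1592|0x8086:0x159b)
                echo "Found ${dev##*/} ($vendor - $device)"   # matches the 'Found 0000:0a:00.x' lines above
                ;;
        esac
    done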
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:47.624 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:47.624 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:47.624 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:47.624 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:47.624 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:47.624 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:47.624 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:47.625 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:47.625 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:47.625 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:32:47.625 00:32:47.625 --- 10.0.0.2 ping statistics --- 00:32:47.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:47.625 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:47.625 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:47.625 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:32:47.625 00:32:47.625 --- 10.0.0.1 ping statistics --- 00:32:47.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:47.625 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:47.625 09:42:31 
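With two physical ports of the same E810 NIC available, nvmf_tcp_init splits them across a network namespace rather than using veth pairs: cvl_0_0 is moved into cvl_0_0_ns_spdk and addressed 10.0.0.2/24, cvl_0_1 stays in the root namespace at 10.0.0.1/24, an iptables rule opens TCP/4420 on the root-namespace side, and one ping in each direction confirms reachability. (For this particular test the kernel nvmet port is later bound to 10.0.0.1, the root-namespace address.) Condensed from the trace above:

    # Split the two NIC ports across a network namespace, as traced above
    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # one port lives inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # root-namespace side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                   # root namespace -> namespaced port
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> root-namespace port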
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:47.625 09:42:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:48.998 Waiting for block devices as requested 00:32:48.998 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:48.998 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:48.998 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:48.998 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:49.256 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:49.256 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:49.256 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:49.256 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:49.515 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:49.515 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:49.515 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:49.772 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:49.772 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:49.772 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:49.773 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:50.030 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:50.030 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:50.030 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:50.030 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:50.030 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:50.030 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:32:50.030 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:50.030 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:32:50.030 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:50.030 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:50.030 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:50.288 No valid GPT data, bailing 00:32:50.288 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:50.288 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:32:50.288 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:32:50.288 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:50.288 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:50.288 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:50.288 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:50.288 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:50.288 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:50.288 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:32:50.288 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:50.288 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:32:50.288 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:50.288 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:32:50.288 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:32:50.288 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:32:50.288 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:50.288 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:32:50.288 00:32:50.288 Discovery Log Number of Records 2, Generation counter 2 00:32:50.288 =====Discovery Log Entry 0====== 00:32:50.288 trtype: tcp 00:32:50.288 adrfam: ipv4 00:32:50.288 subtype: current discovery subsystem 00:32:50.288 treq: not specified, sq flow control disable supported 00:32:50.288 portid: 1 00:32:50.288 trsvcid: 4420 00:32:50.288 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:50.288 traddr: 10.0.0.1 00:32:50.288 eflags: none 00:32:50.288 sectype: none 00:32:50.288 =====Discovery Log Entry 1====== 00:32:50.288 trtype: tcp 00:32:50.288 adrfam: ipv4 00:32:50.288 subtype: nvme subsystem 00:32:50.288 treq: not specified, sq flow control disable supported 00:32:50.288 portid: 1 00:32:50.288 trsvcid: 4420 00:32:50.288 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:50.288 traddr: 10.0.0.1 00:32:50.288 eflags: none 00:32:50.288 sectype: none 00:32:50.288 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:32:50.288 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:32:50.288 EAL: No free 2048 kB hugepages reported on node 1 00:32:50.288 ===================================================== 00:32:50.288 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:32:50.288 ===================================================== 00:32:50.288 Controller Capabilities/Features 00:32:50.288 ================================ 00:32:50.288 Vendor ID: 0000 00:32:50.288 Subsystem Vendor ID: 0000 00:32:50.288 Serial Number: 1ec2697bacb290eda7d0 00:32:50.288 Model Number: Linux 00:32:50.288 Firmware Version: 6.7.0-68 00:32:50.288 Recommended Arb Burst: 0 00:32:50.288 IEEE OUI Identifier: 00 00 00 00:32:50.288 Multi-path I/O 00:32:50.288 May have multiple subsystem ports: No 00:32:50.288 May have multiple 
controllers: No 00:32:50.288 Associated with SR-IOV VF: No 00:32:50.288 Max Data Transfer Size: Unlimited 00:32:50.288 Max Number of Namespaces: 0 00:32:50.288 Max Number of I/O Queues: 1024 00:32:50.288 NVMe Specification Version (VS): 1.3 00:32:50.288 NVMe Specification Version (Identify): 1.3 00:32:50.288 Maximum Queue Entries: 1024 00:32:50.288 Contiguous Queues Required: No 00:32:50.288 Arbitration Mechanisms Supported 00:32:50.288 Weighted Round Robin: Not Supported 00:32:50.288 Vendor Specific: Not Supported 00:32:50.288 Reset Timeout: 7500 ms 00:32:50.288 Doorbell Stride: 4 bytes 00:32:50.288 NVM Subsystem Reset: Not Supported 00:32:50.288 Command Sets Supported 00:32:50.288 NVM Command Set: Supported 00:32:50.288 Boot Partition: Not Supported 00:32:50.288 Memory Page Size Minimum: 4096 bytes 00:32:50.288 Memory Page Size Maximum: 4096 bytes 00:32:50.288 Persistent Memory Region: Not Supported 00:32:50.288 Optional Asynchronous Events Supported 00:32:50.288 Namespace Attribute Notices: Not Supported 00:32:50.288 Firmware Activation Notices: Not Supported 00:32:50.288 ANA Change Notices: Not Supported 00:32:50.288 PLE Aggregate Log Change Notices: Not Supported 00:32:50.288 LBA Status Info Alert Notices: Not Supported 00:32:50.288 EGE Aggregate Log Change Notices: Not Supported 00:32:50.288 Normal NVM Subsystem Shutdown event: Not Supported 00:32:50.288 Zone Descriptor Change Notices: Not Supported 00:32:50.288 Discovery Log Change Notices: Supported 00:32:50.288 Controller Attributes 00:32:50.288 128-bit Host Identifier: Not Supported 00:32:50.288 Non-Operational Permissive Mode: Not Supported 00:32:50.288 NVM Sets: Not Supported 00:32:50.288 Read Recovery Levels: Not Supported 00:32:50.288 Endurance Groups: Not Supported 00:32:50.288 Predictable Latency Mode: Not Supported 00:32:50.288 Traffic Based Keep ALive: Not Supported 00:32:50.288 Namespace Granularity: Not Supported 00:32:50.288 SQ Associations: Not Supported 00:32:50.288 UUID List: Not Supported 00:32:50.288 Multi-Domain Subsystem: Not Supported 00:32:50.288 Fixed Capacity Management: Not Supported 00:32:50.288 Variable Capacity Management: Not Supported 00:32:50.288 Delete Endurance Group: Not Supported 00:32:50.288 Delete NVM Set: Not Supported 00:32:50.288 Extended LBA Formats Supported: Not Supported 00:32:50.288 Flexible Data Placement Supported: Not Supported 00:32:50.288 00:32:50.288 Controller Memory Buffer Support 00:32:50.288 ================================ 00:32:50.288 Supported: No 00:32:50.288 00:32:50.288 Persistent Memory Region Support 00:32:50.288 ================================ 00:32:50.288 Supported: No 00:32:50.288 00:32:50.288 Admin Command Set Attributes 00:32:50.288 ============================ 00:32:50.288 Security Send/Receive: Not Supported 00:32:50.289 Format NVM: Not Supported 00:32:50.289 Firmware Activate/Download: Not Supported 00:32:50.289 Namespace Management: Not Supported 00:32:50.289 Device Self-Test: Not Supported 00:32:50.289 Directives: Not Supported 00:32:50.289 NVMe-MI: Not Supported 00:32:50.289 Virtualization Management: Not Supported 00:32:50.289 Doorbell Buffer Config: Not Supported 00:32:50.289 Get LBA Status Capability: Not Supported 00:32:50.289 Command & Feature Lockdown Capability: Not Supported 00:32:50.289 Abort Command Limit: 1 00:32:50.289 Async Event Request Limit: 1 00:32:50.289 Number of Firmware Slots: N/A 00:32:50.289 Firmware Slot 1 Read-Only: N/A 00:32:50.289 Firmware Activation Without Reset: N/A 00:32:50.289 Multiple Update Detection Support: N/A 
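The two discovery records and the identify data in this part of the log come from a kernel nvmet target that configure_kernel_target built entirely through configfs: create the subsystem and a namespace backed by /dev/nvme0n1, create port 1 bound to 10.0.0.1:4420 over TCP/IPv4, then link the subsystem into the port. The values written are visible in the trace, but the attribute file names are not, so the paths below are the standard nvmet configfs ones filled in as a reconstruction rather than a literal replay of common.sh:

    # Kernel NVMe-oF target via configfs (standard nvmet layout; a hedged sketch)
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    mkdir "$subsys"
    mkdir "$subsys/namespaces/1"
    mkdir "$nvmet/ports/1"
    echo "SPDK-nqn.2016-06.io.spdk:testnqn" > "$subsys/attr_model"   # attribute name assumed, value from the trace
    echo 1            > "$subsys/attr_allow_any_host"                # assumed destination of one 'echo 1' write
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"           # back the namespace with the local NVMe disk
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
    echo tcp          > "$nvmet/ports/1/addr_trtype"
    echo 4420         > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4         > "$nvmet/ports/1/addr_adrfam"
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"                     # expose the subsystem on the port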
00:32:50.289 Firmware Update Granularity: No Information Provided 00:32:50.289 Per-Namespace SMART Log: No 00:32:50.289 Asymmetric Namespace Access Log Page: Not Supported 00:32:50.289 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:32:50.289 Command Effects Log Page: Not Supported 00:32:50.289 Get Log Page Extended Data: Supported 00:32:50.289 Telemetry Log Pages: Not Supported 00:32:50.289 Persistent Event Log Pages: Not Supported 00:32:50.289 Supported Log Pages Log Page: May Support 00:32:50.289 Commands Supported & Effects Log Page: Not Supported 00:32:50.289 Feature Identifiers & Effects Log Page:May Support 00:32:50.289 NVMe-MI Commands & Effects Log Page: May Support 00:32:50.289 Data Area 4 for Telemetry Log: Not Supported 00:32:50.289 Error Log Page Entries Supported: 1 00:32:50.289 Keep Alive: Not Supported 00:32:50.289 00:32:50.289 NVM Command Set Attributes 00:32:50.289 ========================== 00:32:50.289 Submission Queue Entry Size 00:32:50.289 Max: 1 00:32:50.289 Min: 1 00:32:50.289 Completion Queue Entry Size 00:32:50.289 Max: 1 00:32:50.289 Min: 1 00:32:50.289 Number of Namespaces: 0 00:32:50.289 Compare Command: Not Supported 00:32:50.289 Write Uncorrectable Command: Not Supported 00:32:50.289 Dataset Management Command: Not Supported 00:32:50.289 Write Zeroes Command: Not Supported 00:32:50.289 Set Features Save Field: Not Supported 00:32:50.289 Reservations: Not Supported 00:32:50.289 Timestamp: Not Supported 00:32:50.289 Copy: Not Supported 00:32:50.289 Volatile Write Cache: Not Present 00:32:50.289 Atomic Write Unit (Normal): 1 00:32:50.289 Atomic Write Unit (PFail): 1 00:32:50.289 Atomic Compare & Write Unit: 1 00:32:50.289 Fused Compare & Write: Not Supported 00:32:50.289 Scatter-Gather List 00:32:50.289 SGL Command Set: Supported 00:32:50.289 SGL Keyed: Not Supported 00:32:50.289 SGL Bit Bucket Descriptor: Not Supported 00:32:50.289 SGL Metadata Pointer: Not Supported 00:32:50.289 Oversized SGL: Not Supported 00:32:50.289 SGL Metadata Address: Not Supported 00:32:50.289 SGL Offset: Supported 00:32:50.289 Transport SGL Data Block: Not Supported 00:32:50.289 Replay Protected Memory Block: Not Supported 00:32:50.289 00:32:50.289 Firmware Slot Information 00:32:50.289 ========================= 00:32:50.289 Active slot: 0 00:32:50.289 00:32:50.289 00:32:50.289 Error Log 00:32:50.289 ========= 00:32:50.289 00:32:50.289 Active Namespaces 00:32:50.289 ================= 00:32:50.289 Discovery Log Page 00:32:50.289 ================== 00:32:50.289 Generation Counter: 2 00:32:50.289 Number of Records: 2 00:32:50.289 Record Format: 0 00:32:50.289 00:32:50.289 Discovery Log Entry 0 00:32:50.289 ---------------------- 00:32:50.289 Transport Type: 3 (TCP) 00:32:50.289 Address Family: 1 (IPv4) 00:32:50.289 Subsystem Type: 3 (Current Discovery Subsystem) 00:32:50.289 Entry Flags: 00:32:50.289 Duplicate Returned Information: 0 00:32:50.289 Explicit Persistent Connection Support for Discovery: 0 00:32:50.289 Transport Requirements: 00:32:50.289 Secure Channel: Not Specified 00:32:50.289 Port ID: 1 (0x0001) 00:32:50.289 Controller ID: 65535 (0xffff) 00:32:50.289 Admin Max SQ Size: 32 00:32:50.289 Transport Service Identifier: 4420 00:32:50.289 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:32:50.289 Transport Address: 10.0.0.1 00:32:50.289 Discovery Log Entry 1 00:32:50.289 ---------------------- 00:32:50.289 Transport Type: 3 (TCP) 00:32:50.289 Address Family: 1 (IPv4) 00:32:50.289 Subsystem Type: 2 (NVM Subsystem) 00:32:50.289 Entry Flags: 
00:32:50.289 Duplicate Returned Information: 0 00:32:50.289 Explicit Persistent Connection Support for Discovery: 0 00:32:50.289 Transport Requirements: 00:32:50.289 Secure Channel: Not Specified 00:32:50.289 Port ID: 1 (0x0001) 00:32:50.289 Controller ID: 65535 (0xffff) 00:32:50.289 Admin Max SQ Size: 32 00:32:50.289 Transport Service Identifier: 4420 00:32:50.289 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:32:50.289 Transport Address: 10.0.0.1 00:32:50.289 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:50.289 EAL: No free 2048 kB hugepages reported on node 1 00:32:50.548 get_feature(0x01) failed 00:32:50.548 get_feature(0x02) failed 00:32:50.548 get_feature(0x04) failed 00:32:50.548 ===================================================== 00:32:50.548 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:50.548 ===================================================== 00:32:50.548 Controller Capabilities/Features 00:32:50.548 ================================ 00:32:50.548 Vendor ID: 0000 00:32:50.548 Subsystem Vendor ID: 0000 00:32:50.548 Serial Number: f0f95223e48f9975f793 00:32:50.548 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:32:50.549 Firmware Version: 6.7.0-68 00:32:50.549 Recommended Arb Burst: 6 00:32:50.549 IEEE OUI Identifier: 00 00 00 00:32:50.549 Multi-path I/O 00:32:50.549 May have multiple subsystem ports: Yes 00:32:50.549 May have multiple controllers: Yes 00:32:50.549 Associated with SR-IOV VF: No 00:32:50.549 Max Data Transfer Size: Unlimited 00:32:50.549 Max Number of Namespaces: 1024 00:32:50.549 Max Number of I/O Queues: 128 00:32:50.549 NVMe Specification Version (VS): 1.3 00:32:50.549 NVMe Specification Version (Identify): 1.3 00:32:50.549 Maximum Queue Entries: 1024 00:32:50.549 Contiguous Queues Required: No 00:32:50.549 Arbitration Mechanisms Supported 00:32:50.549 Weighted Round Robin: Not Supported 00:32:50.549 Vendor Specific: Not Supported 00:32:50.549 Reset Timeout: 7500 ms 00:32:50.549 Doorbell Stride: 4 bytes 00:32:50.549 NVM Subsystem Reset: Not Supported 00:32:50.549 Command Sets Supported 00:32:50.549 NVM Command Set: Supported 00:32:50.549 Boot Partition: Not Supported 00:32:50.549 Memory Page Size Minimum: 4096 bytes 00:32:50.549 Memory Page Size Maximum: 4096 bytes 00:32:50.549 Persistent Memory Region: Not Supported 00:32:50.549 Optional Asynchronous Events Supported 00:32:50.549 Namespace Attribute Notices: Supported 00:32:50.549 Firmware Activation Notices: Not Supported 00:32:50.549 ANA Change Notices: Supported 00:32:50.549 PLE Aggregate Log Change Notices: Not Supported 00:32:50.549 LBA Status Info Alert Notices: Not Supported 00:32:50.549 EGE Aggregate Log Change Notices: Not Supported 00:32:50.549 Normal NVM Subsystem Shutdown event: Not Supported 00:32:50.549 Zone Descriptor Change Notices: Not Supported 00:32:50.549 Discovery Log Change Notices: Not Supported 00:32:50.549 Controller Attributes 00:32:50.549 128-bit Host Identifier: Supported 00:32:50.549 Non-Operational Permissive Mode: Not Supported 00:32:50.549 NVM Sets: Not Supported 00:32:50.549 Read Recovery Levels: Not Supported 00:32:50.549 Endurance Groups: Not Supported 00:32:50.549 Predictable Latency Mode: Not Supported 00:32:50.549 Traffic Based Keep ALive: Supported 00:32:50.549 Namespace Granularity: Not Supported 
00:32:50.549 SQ Associations: Not Supported 00:32:50.549 UUID List: Not Supported 00:32:50.549 Multi-Domain Subsystem: Not Supported 00:32:50.549 Fixed Capacity Management: Not Supported 00:32:50.549 Variable Capacity Management: Not Supported 00:32:50.549 Delete Endurance Group: Not Supported 00:32:50.549 Delete NVM Set: Not Supported 00:32:50.549 Extended LBA Formats Supported: Not Supported 00:32:50.549 Flexible Data Placement Supported: Not Supported 00:32:50.549 00:32:50.549 Controller Memory Buffer Support 00:32:50.549 ================================ 00:32:50.549 Supported: No 00:32:50.549 00:32:50.549 Persistent Memory Region Support 00:32:50.549 ================================ 00:32:50.549 Supported: No 00:32:50.549 00:32:50.549 Admin Command Set Attributes 00:32:50.549 ============================ 00:32:50.549 Security Send/Receive: Not Supported 00:32:50.549 Format NVM: Not Supported 00:32:50.549 Firmware Activate/Download: Not Supported 00:32:50.549 Namespace Management: Not Supported 00:32:50.549 Device Self-Test: Not Supported 00:32:50.549 Directives: Not Supported 00:32:50.549 NVMe-MI: Not Supported 00:32:50.549 Virtualization Management: Not Supported 00:32:50.549 Doorbell Buffer Config: Not Supported 00:32:50.549 Get LBA Status Capability: Not Supported 00:32:50.549 Command & Feature Lockdown Capability: Not Supported 00:32:50.549 Abort Command Limit: 4 00:32:50.549 Async Event Request Limit: 4 00:32:50.549 Number of Firmware Slots: N/A 00:32:50.549 Firmware Slot 1 Read-Only: N/A 00:32:50.549 Firmware Activation Without Reset: N/A 00:32:50.549 Multiple Update Detection Support: N/A 00:32:50.549 Firmware Update Granularity: No Information Provided 00:32:50.549 Per-Namespace SMART Log: Yes 00:32:50.549 Asymmetric Namespace Access Log Page: Supported 00:32:50.549 ANA Transition Time : 10 sec 00:32:50.549 00:32:50.549 Asymmetric Namespace Access Capabilities 00:32:50.549 ANA Optimized State : Supported 00:32:50.549 ANA Non-Optimized State : Supported 00:32:50.549 ANA Inaccessible State : Supported 00:32:50.549 ANA Persistent Loss State : Supported 00:32:50.549 ANA Change State : Supported 00:32:50.549 ANAGRPID is not changed : No 00:32:50.549 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:32:50.549 00:32:50.549 ANA Group Identifier Maximum : 128 00:32:50.549 Number of ANA Group Identifiers : 128 00:32:50.549 Max Number of Allowed Namespaces : 1024 00:32:50.549 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:32:50.549 Command Effects Log Page: Supported 00:32:50.549 Get Log Page Extended Data: Supported 00:32:50.549 Telemetry Log Pages: Not Supported 00:32:50.549 Persistent Event Log Pages: Not Supported 00:32:50.549 Supported Log Pages Log Page: May Support 00:32:50.549 Commands Supported & Effects Log Page: Not Supported 00:32:50.549 Feature Identifiers & Effects Log Page:May Support 00:32:50.549 NVMe-MI Commands & Effects Log Page: May Support 00:32:50.549 Data Area 4 for Telemetry Log: Not Supported 00:32:50.549 Error Log Page Entries Supported: 128 00:32:50.549 Keep Alive: Supported 00:32:50.549 Keep Alive Granularity: 1000 ms 00:32:50.549 00:32:50.549 NVM Command Set Attributes 00:32:50.549 ========================== 00:32:50.549 Submission Queue Entry Size 00:32:50.549 Max: 64 00:32:50.549 Min: 64 00:32:50.549 Completion Queue Entry Size 00:32:50.549 Max: 16 00:32:50.549 Min: 16 00:32:50.549 Number of Namespaces: 1024 00:32:50.549 Compare Command: Not Supported 00:32:50.549 Write Uncorrectable Command: Not Supported 00:32:50.549 Dataset Management Command: Supported 
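This identify output, together with the namespace details that follow, contains everything an initiator needs to attach the kernel-exported namespace: the subsystem NQN, the TCP listener at 10.0.0.1:4420, ANA group 1, and a 931 GiB namespace. The test itself only identifies the controller; purely as an illustrative follow-up (not performed in this trace), the same target could be attached with nvme-cli:

    # Illustrative only: identify_kernel_nvmf.sh does not issue this connect
    nvme connect -t tcp -a 10.0.0.1 -s 4420 \
        -n nqn.2016-06.io.spdk:testnqn \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
    nvme list                                        # the namespace appears as a new /dev/nvme*n1
    nvme disconnect -n nqn.2016-06.io.spdk:testnqn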
00:32:50.549 Write Zeroes Command: Supported 00:32:50.549 Set Features Save Field: Not Supported 00:32:50.549 Reservations: Not Supported 00:32:50.549 Timestamp: Not Supported 00:32:50.549 Copy: Not Supported 00:32:50.549 Volatile Write Cache: Present 00:32:50.549 Atomic Write Unit (Normal): 1 00:32:50.549 Atomic Write Unit (PFail): 1 00:32:50.549 Atomic Compare & Write Unit: 1 00:32:50.549 Fused Compare & Write: Not Supported 00:32:50.549 Scatter-Gather List 00:32:50.549 SGL Command Set: Supported 00:32:50.549 SGL Keyed: Not Supported 00:32:50.549 SGL Bit Bucket Descriptor: Not Supported 00:32:50.549 SGL Metadata Pointer: Not Supported 00:32:50.549 Oversized SGL: Not Supported 00:32:50.549 SGL Metadata Address: Not Supported 00:32:50.549 SGL Offset: Supported 00:32:50.549 Transport SGL Data Block: Not Supported 00:32:50.549 Replay Protected Memory Block: Not Supported 00:32:50.549 00:32:50.549 Firmware Slot Information 00:32:50.549 ========================= 00:32:50.549 Active slot: 0 00:32:50.549 00:32:50.549 Asymmetric Namespace Access 00:32:50.549 =========================== 00:32:50.549 Change Count : 0 00:32:50.549 Number of ANA Group Descriptors : 1 00:32:50.549 ANA Group Descriptor : 0 00:32:50.549 ANA Group ID : 1 00:32:50.549 Number of NSID Values : 1 00:32:50.549 Change Count : 0 00:32:50.549 ANA State : 1 00:32:50.549 Namespace Identifier : 1 00:32:50.549 00:32:50.549 Commands Supported and Effects 00:32:50.549 ============================== 00:32:50.549 Admin Commands 00:32:50.549 -------------- 00:32:50.549 Get Log Page (02h): Supported 00:32:50.549 Identify (06h): Supported 00:32:50.549 Abort (08h): Supported 00:32:50.549 Set Features (09h): Supported 00:32:50.549 Get Features (0Ah): Supported 00:32:50.549 Asynchronous Event Request (0Ch): Supported 00:32:50.549 Keep Alive (18h): Supported 00:32:50.549 I/O Commands 00:32:50.549 ------------ 00:32:50.549 Flush (00h): Supported 00:32:50.549 Write (01h): Supported LBA-Change 00:32:50.549 Read (02h): Supported 00:32:50.549 Write Zeroes (08h): Supported LBA-Change 00:32:50.549 Dataset Management (09h): Supported 00:32:50.549 00:32:50.549 Error Log 00:32:50.549 ========= 00:32:50.549 Entry: 0 00:32:50.549 Error Count: 0x3 00:32:50.549 Submission Queue Id: 0x0 00:32:50.549 Command Id: 0x5 00:32:50.549 Phase Bit: 0 00:32:50.549 Status Code: 0x2 00:32:50.549 Status Code Type: 0x0 00:32:50.549 Do Not Retry: 1 00:32:50.549 Error Location: 0x28 00:32:50.549 LBA: 0x0 00:32:50.549 Namespace: 0x0 00:32:50.549 Vendor Log Page: 0x0 00:32:50.549 ----------- 00:32:50.549 Entry: 1 00:32:50.549 Error Count: 0x2 00:32:50.549 Submission Queue Id: 0x0 00:32:50.549 Command Id: 0x5 00:32:50.549 Phase Bit: 0 00:32:50.549 Status Code: 0x2 00:32:50.549 Status Code Type: 0x0 00:32:50.549 Do Not Retry: 1 00:32:50.549 Error Location: 0x28 00:32:50.549 LBA: 0x0 00:32:50.549 Namespace: 0x0 00:32:50.549 Vendor Log Page: 0x0 00:32:50.549 ----------- 00:32:50.549 Entry: 2 00:32:50.549 Error Count: 0x1 00:32:50.549 Submission Queue Id: 0x0 00:32:50.549 Command Id: 0x4 00:32:50.549 Phase Bit: 0 00:32:50.549 Status Code: 0x2 00:32:50.549 Status Code Type: 0x0 00:32:50.549 Do Not Retry: 1 00:32:50.549 Error Location: 0x28 00:32:50.549 LBA: 0x0 00:32:50.549 Namespace: 0x0 00:32:50.550 Vendor Log Page: 0x0 00:32:50.550 00:32:50.550 Number of Queues 00:32:50.550 ================ 00:32:50.550 Number of I/O Submission Queues: 128 00:32:50.550 Number of I/O Completion Queues: 128 00:32:50.550 00:32:50.550 ZNS Specific Controller Data 00:32:50.550 
============================ 00:32:50.550 Zone Append Size Limit: 0 00:32:50.550 00:32:50.550 00:32:50.550 Active Namespaces 00:32:50.550 ================= 00:32:50.550 get_feature(0x05) failed 00:32:50.550 Namespace ID:1 00:32:50.550 Command Set Identifier: NVM (00h) 00:32:50.550 Deallocate: Supported 00:32:50.550 Deallocated/Unwritten Error: Not Supported 00:32:50.550 Deallocated Read Value: Unknown 00:32:50.550 Deallocate in Write Zeroes: Not Supported 00:32:50.550 Deallocated Guard Field: 0xFFFF 00:32:50.550 Flush: Supported 00:32:50.550 Reservation: Not Supported 00:32:50.550 Namespace Sharing Capabilities: Multiple Controllers 00:32:50.550 Size (in LBAs): 1953525168 (931GiB) 00:32:50.550 Capacity (in LBAs): 1953525168 (931GiB) 00:32:50.550 Utilization (in LBAs): 1953525168 (931GiB) 00:32:50.550 UUID: 6a5f4871-6eb8-4486-bd7c-22dc0881f83e 00:32:50.550 Thin Provisioning: Not Supported 00:32:50.550 Per-NS Atomic Units: Yes 00:32:50.550 Atomic Boundary Size (Normal): 0 00:32:50.550 Atomic Boundary Size (PFail): 0 00:32:50.550 Atomic Boundary Offset: 0 00:32:50.550 NGUID/EUI64 Never Reused: No 00:32:50.550 ANA group ID: 1 00:32:50.550 Namespace Write Protected: No 00:32:50.550 Number of LBA Formats: 1 00:32:50.550 Current LBA Format: LBA Format #00 00:32:50.550 LBA Format #00: Data Size: 512 Metadata Size: 0 00:32:50.550 00:32:50.550 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:32:50.550 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:50.550 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:32:50.550 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:50.550 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:32:50.550 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:50.550 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:50.550 rmmod nvme_tcp 00:32:50.550 rmmod nvme_fabrics 00:32:50.550 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:50.550 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:32:50.550 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:32:50.550 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:32:50.550 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:50.550 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:50.550 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:50.550 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:50.550 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:50.550 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:50.550 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:50.550 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:52.454 09:42:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:52.454 
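By this point nvmftestfini has unloaded nvme-tcp and nvme-fabrics and flushed the test addresses; the second half of the EXIT trap, clean_kernel_target (traced below), then dismantles the configfs target in the reverse order it was built. Condensed, with the one write whose destination is not visible in the trace marked as an assumption:

    # Tear down the kernel nvmet target, mirroring the setup in reverse
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    echo 0 > "$subsys/namespaces/1/enable"           # assumed destination of the 'echo 0' in the trace
    rm -f  "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"   # unlink the subsystem from the port
    rmdir  "$subsys/namespaces/1"
    rmdir  "$nvmet/ports/1"
    rmdir  "$subsys"
    modprobe -r nvmet_tcp nvmet                      # unload the kernel target modules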
09:42:36 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:32:52.454 09:42:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:52.454 09:42:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:32:52.454 09:42:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:52.454 09:42:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:52.454 09:42:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:52.454 09:42:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:52.454 09:42:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:52.454 09:42:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:52.712 09:42:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:53.646 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:53.647 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:53.647 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:53.647 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:53.647 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:53.647 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:53.647 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:53.906 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:53.906 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:53.906 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:53.906 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:53.906 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:53.906 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:53.906 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:53.906 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:53.906 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:54.841 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:32:54.841 00:32:54.841 real 0m9.497s 00:32:54.841 user 0m2.072s 00:32:54.841 sys 0m3.409s 00:32:54.842 09:42:39 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:54.842 09:42:39 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:54.842 ************************************ 00:32:54.842 END TEST nvmf_identify_kernel_target 00:32:54.842 ************************************ 00:32:54.842 09:42:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:32:54.842 09:42:39 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:54.842 09:42:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:54.842 09:42:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:54.842 09:42:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:54.842 ************************************ 00:32:54.842 START TEST nvmf_auth_host 00:32:54.842 ************************************ 00:32:54.842 09:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:55.106 * Looking for test storage... 00:32:55.106 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:55.106 09:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:55.106 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:32:55.106 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:55.106 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:55.106 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:55.106 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:55.106 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:55.106 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:55.106 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:55.106 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:55.106 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:55.106 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:55.106 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:55.106 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:55.106 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:55.106 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:55.106 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:55.106 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:55.106 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:55.106 09:42:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:55.106 09:42:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:55.106 09:42:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:55.106 09:42:39 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:55.107 09:42:39 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:55.107 09:42:39 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:55.107 09:42:39 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:32:55.107 09:42:39 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:55.107 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:32:55.107 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:55.107 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:55.107 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:55.107 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:55.107 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:55.107 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:55.107 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:55.107 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:55.107 09:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:32:55.107 09:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:32:55.107 09:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:32:55.107 09:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:32:55.107 09:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:55.107 09:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:55.107 09:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:32:55.107 09:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:32:55.107 09:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:32:55.107 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:55.107 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:55.107 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:55.107 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:55.107 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:55.107 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:55.107 09:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:55.107 09:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:55.107 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:55.107 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:55.107 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:32:55.107 09:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.053 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:57.053 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:32:57.053 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:57.054 
09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:57.054 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:57.054 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:57.054 Found net devices under 0000:0a:00.0: 
cvl_0_0 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:57.054 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:57.054 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:57.054 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms 00:32:57.054 00:32:57.054 --- 10.0.0.2 ping statistics --- 00:32:57.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:57.054 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:57.054 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:57.054 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:32:57.054 00:32:57.054 --- 10.0.0.1 ping statistics --- 00:32:57.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:57.054 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=879098 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 879098 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 879098 ']' 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
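
For readability, the nvmf_tcp_init sequence traced above condenses to the following sketch. It assumes the two ice ports enumerate as cvl_0_0 (moved into the target namespace) and cvl_0_1 (left in the default namespace as the initiator side), as they do in this run; paths and flags are the ones visible in the trace.

    # Condensed bring-up of the NVMe/TCP test network and the SPDK target app
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                  # target gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator-side address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target-side address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP in
    ping -c 1 10.0.0.2                                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator
    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
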
00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:57.054 09:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.314 09:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:57.314 09:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:32:57.314 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:57.314 09:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:57.314 09:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.314 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b93747d713f2465241dd4c1322f14b0e 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.pMz 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b93747d713f2465241dd4c1322f14b0e 0 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b93747d713f2465241dd4c1322f14b0e 0 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b93747d713f2465241dd4c1322f14b0e 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.pMz 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.pMz 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.pMz 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:57.588 
09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=171c996c63046f832b0b067e169cb08ddea4fc44f5999b4d95c8c670938386cf 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.jed 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 171c996c63046f832b0b067e169cb08ddea4fc44f5999b4d95c8c670938386cf 3 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 171c996c63046f832b0b067e169cb08ddea4fc44f5999b4d95c8c670938386cf 3 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=171c996c63046f832b0b067e169cb08ddea4fc44f5999b4d95c8c670938386cf 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.jed 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.jed 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.jed 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=554f67f2d8766b9e867dcd1fd92b20da9d51bffbd66c1b71 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.OiM 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 554f67f2d8766b9e867dcd1fd92b20da9d51bffbd66c1b71 0 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 554f67f2d8766b9e867dcd1fd92b20da9d51bffbd66c1b71 0 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=554f67f2d8766b9e867dcd1fd92b20da9d51bffbd66c1b71 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.OiM 00:32:57.588 09:42:41 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.OiM 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.OiM 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=711c9d4c8c4ef83928a4a7406dc643c4d357ce5264eb6af0 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.rDA 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 711c9d4c8c4ef83928a4a7406dc643c4d357ce5264eb6af0 2 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 711c9d4c8c4ef83928a4a7406dc643c4d357ce5264eb6af0 2 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=711c9d4c8c4ef83928a4a7406dc643c4d357ce5264eb6af0 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.rDA 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.rDA 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.rDA 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:57.588 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:57.589 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:57.589 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:57.589 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:57.589 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:57.589 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c6f9ba394a2cb655ac595ef1d1cd2a79 00:32:57.589 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:57.589 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.BnO 00:32:57.589 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c6f9ba394a2cb655ac595ef1d1cd2a79 1 00:32:57.589 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c6f9ba394a2cb655ac595ef1d1cd2a79 1 
00:32:57.589 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:57.589 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:57.589 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c6f9ba394a2cb655ac595ef1d1cd2a79 00:32:57.589 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:57.589 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.BnO 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.BnO 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.BnO 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3bb56fa18d28c76ccbe59aa2b88fa1da 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.MDy 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3bb56fa18d28c76ccbe59aa2b88fa1da 1 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3bb56fa18d28c76ccbe59aa2b88fa1da 1 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=3bb56fa18d28c76ccbe59aa2b88fa1da 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.MDy 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.MDy 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.MDy 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=05b4097673d5016b055d6e74a6b13e612de746c37d37d3b9 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.TA0 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 05b4097673d5016b055d6e74a6b13e612de746c37d37d3b9 2 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 05b4097673d5016b055d6e74a6b13e612de746c37d37d3b9 2 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=05b4097673d5016b055d6e74a6b13e612de746c37d37d3b9 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.TA0 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.TA0 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.TA0 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ab2b9b4d2e10fb554abe2f52a127efc2 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.aBT 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ab2b9b4d2e10fb554abe2f52a127efc2 0 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ab2b9b4d2e10fb554abe2f52a127efc2 0 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ab2b9b4d2e10fb554abe2f52a127efc2 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.aBT 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.aBT 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.aBT 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=99c7845fa9fb7adf75d731f3d56217af75a9bae75308e004fda183657975c4e4 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.D5r 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 99c7845fa9fb7adf75d731f3d56217af75a9bae75308e004fda183657975c4e4 3 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 99c7845fa9fb7adf75d731f3d56217af75a9bae75308e004fda183657975c4e4 3 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=99c7845fa9fb7adf75d731f3d56217af75a9bae75308e004fda183657975c4e4 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.D5r 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.D5r 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.D5r 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 879098 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 879098 ']' 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:57.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
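
The key material above comes from the gen_dhchap_key helper in nvmf/common.sh. A rough reconstruction follows; the DHHC-1 framing (two-digit digest id, and what appears to be a 4-byte CRC-32 appended to the ASCII hex secret before base64 encoding) is inferred from the DHHC-1:XX:...: strings printed in this log, since the python body of format_dhchap_key is not shown in the trace.

    # Rough reconstruction of gen_dhchap_key / format_dhchap_key (CRC-32 trailer is an assumption)
    gen_dhchap_key() {
        local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
        local digest=${digests[$1]} len=$2 key file
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)      # len hex characters of randomness
        file=$(mktemp -t "spdk.key-$1.XXX")
        python3 - "$key" "$digest" > "$file" <<'PY'
    import base64, binascii, sys
    secret = sys.argv[1].encode()                        # the hex string itself is the secret
    crc = binascii.crc32(secret).to_bytes(4, "little")   # assumed 4-byte CRC-32 trailer
    print(f"DHHC-1:{int(sys.argv[2]):02}:{base64.b64encode(secret + crc).decode()}:")
    PY
        chmod 0600 "$file"
        echo "$file"
    }
    # usage mirroring the trace: keys[1]=$(gen_dhchap_key null 48); ckeys[1]=$(gen_dhchap_key sha384 48)
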
00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:57.846 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.410 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:58.410 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:32:58.410 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.pMz 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.jed ]] 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.jed 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.OiM 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.rDA ]] 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.rDA 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.BnO 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.MDy ]] 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.MDy 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.TA0 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.aBT ]] 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.aBT 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.D5r 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
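
The auth.sh@80-82 loop just traced hands each generated key file to the running nvmf_tgt keyring; ckeyN is the controller-side (bidirectional) counterpart of keyN. A minimal equivalent with scripts/rpc.py, assuming the default /var/tmp/spdk.sock RPC socket that rpc_cmd uses in this run:

    # Register host keys (keyN) and controller keys (ckeyN) with the SPDK target
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for i in "${!keys[@]}"; do
        "$RPC" keyring_file_add_key "key$i" "${keys[$i]}"
        # ckeys[4] is empty in this run, so the controller key is only added when present
        [ -n "${ckeys[$i]:-}" ] && "$RPC" keyring_file_add_key "ckey$i" "${ckeys[$i]}"
    done
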
00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:58.411 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:59.342 Waiting for block devices as requested 00:32:59.342 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:59.342 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:59.600 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:59.600 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:59.600 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:59.858 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:59.858 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:59.858 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:59.858 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:00.117 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:00.117 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:00.117 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:00.374 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:00.374 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:00.374 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:00.374 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:00.631 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:00.889 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:33:00.889 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:00.889 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:33:00.889 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:33:00.889 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:00.889 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:33:00.889 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:33:00.889 09:42:45 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:33:00.890 09:42:45 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:01.149 No valid GPT data, bailing 00:33:01.149 09:42:45 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:01.149 09:42:45 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:33:01.149 09:42:45 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:33:01.149 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:33:01.149 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:33:01.149 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:01.149 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:01.149 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:01.149 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:33:01.149 09:42:45 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:33:01.149 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:33:01.149 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:33:01.149 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:33:01.149 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:33:01.149 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:33:01.149 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:33:01.149 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:01.149 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:33:01.149 00:33:01.149 Discovery Log Number of Records 2, Generation counter 2 00:33:01.149 =====Discovery Log Entry 0====== 00:33:01.149 trtype: tcp 00:33:01.149 adrfam: ipv4 00:33:01.149 subtype: current discovery subsystem 00:33:01.149 treq: not specified, sq flow control disable supported 00:33:01.149 portid: 1 00:33:01.149 trsvcid: 4420 00:33:01.149 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:01.149 traddr: 10.0.0.1 00:33:01.149 eflags: none 00:33:01.149 sectype: none 00:33:01.149 =====Discovery Log Entry 1====== 00:33:01.149 trtype: tcp 00:33:01.149 adrfam: ipv4 00:33:01.149 subtype: nvme subsystem 00:33:01.149 treq: not specified, sq flow control disable supported 00:33:01.149 portid: 1 00:33:01.149 trsvcid: 4420 00:33:01.149 subnqn: nqn.2024-02.io.spdk:cnode0 00:33:01.149 traddr: 10.0.0.1 00:33:01.149 eflags: none 00:33:01.149 sectype: none 00:33:01.149 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:01.149 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:33:01.149 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:33:01.149 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:01.149 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.149 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:01.149 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:01.149 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:01.149 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTU0ZjY3ZjJkODc2NmI5ZTg2N2RjZDFmZDkyYjIwZGE5ZDUxYmZmYmQ2NmMxYjcxOuJ+qQ==: 00:33:01.149 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzExYzlkNGM4YzRlZjgzOTI4YTRhNzQwNmRjNjQzYzRkMzU3Y2U1MjY0ZWI2YWYw4Fl4Ww==: 00:33:01.149 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:01.149 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:01.149 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTU0ZjY3ZjJkODc2NmI5ZTg2N2RjZDFmZDkyYjIwZGE5ZDUxYmZmYmQ2NmMxYjcxOuJ+qQ==: 00:33:01.149 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzExYzlkNGM4YzRlZjgzOTI4YTRhNzQwNmRjNjQzYzRkMzU3Y2U1MjY0ZWI2YWYw4Fl4Ww==: 
]] 00:33:01.149 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzExYzlkNGM4YzRlZjgzOTI4YTRhNzQwNmRjNjQzYzRkMzU3Y2U1MjY0ZWI2YWYw4Fl4Ww==: 00:33:01.149 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:33:01.149 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:33:01.149 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:33:01.149 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:33:01.149 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:33:01.149 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.150 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:33:01.150 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:33:01.150 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:01.150 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.150 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:33:01.150 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.150 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.150 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.150 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.150 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:01.150 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:01.150 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:01.150 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.150 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.150 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:01.150 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:01.150 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:01.150 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:01.150 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:01.150 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:01.150 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.150 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.409 nvme0n1 00:33:01.409 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.409 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:01.409 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:01.409 09:42:45 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.409 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.409 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.409 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.409 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.409 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.409 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.409 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.409 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:01.409 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:01.409 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:01.409 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:33:01.409 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.409 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:01.409 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:01.409 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:01.409 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjkzNzQ3ZDcxM2YyNDY1MjQxZGQ0YzEzMjJmMTRiMGXzFvzn: 00:33:01.409 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTcxYzk5NmM2MzA0NmY4MzJiMGIwNjdlMTY5Y2IwOGRkZWE0ZmM0NGY1OTk5YjRkOTVjOGM2NzA5MzgzODZjZmQbMFE=: 00:33:01.409 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:01.409 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:01.409 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjkzNzQ3ZDcxM2YyNDY1MjQxZGQ0YzEzMjJmMTRiMGXzFvzn: 00:33:01.409 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTcxYzk5NmM2MzA0NmY4MzJiMGIwNjdlMTY5Y2IwOGRkZWE0ZmM0NGY1OTk5YjRkOTVjOGM2NzA5MzgzODZjZmQbMFE=: ]] 00:33:01.409 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTcxYzk5NmM2MzA0NmY4MzJiMGIwNjdlMTY5Y2IwOGRkZWE0ZmM0NGY1OTk5YjRkOTVjOGM2NzA5MzgzODZjZmQbMFE=: 00:33:01.409 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:33:01.409 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.409 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:01.409 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:01.409 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:01.409 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.409 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:01.409 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.409 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.409 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.409 
09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.409 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:01.409 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:01.409 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:01.409 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.409 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.409 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:01.409 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:01.409 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:01.409 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:01.409 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:01.409 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:01.409 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.409 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.668 nvme0n1 00:33:01.668 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.668 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:01.668 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:01.668 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.668 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.668 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.668 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.668 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.668 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.668 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.668 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.668 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:01.668 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:01.668 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.668 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:01.668 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:01.668 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:01.668 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTU0ZjY3ZjJkODc2NmI5ZTg2N2RjZDFmZDkyYjIwZGE5ZDUxYmZmYmQ2NmMxYjcxOuJ+qQ==: 00:33:01.668 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzExYzlkNGM4YzRlZjgzOTI4YTRhNzQwNmRjNjQzYzRkMzU3Y2U1MjY0ZWI2YWYw4Fl4Ww==: 00:33:01.668 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:01.668 09:42:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:01.668 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTU0ZjY3ZjJkODc2NmI5ZTg2N2RjZDFmZDkyYjIwZGE5ZDUxYmZmYmQ2NmMxYjcxOuJ+qQ==: 00:33:01.668 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzExYzlkNGM4YzRlZjgzOTI4YTRhNzQwNmRjNjQzYzRkMzU3Y2U1MjY0ZWI2YWYw4Fl4Ww==: ]] 00:33:01.668 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzExYzlkNGM4YzRlZjgzOTI4YTRhNzQwNmRjNjQzYzRkMzU3Y2U1MjY0ZWI2YWYw4Fl4Ww==: 00:33:01.668 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:33:01.668 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.668 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:01.668 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:01.668 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:01.668 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.668 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:01.668 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.668 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.668 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.668 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.668 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:01.668 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:01.668 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:01.668 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.668 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.668 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:01.668 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:01.668 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:01.668 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:01.668 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:01.668 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:01.668 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.668 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.926 nvme0n1 00:33:01.926 09:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.926 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:01.926 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:01.926 09:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.926 09:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
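
From auth.sh@100 onward the trace iterates over every digest, DH group and key index; each connect_authenticate pass programs the key into the kernel nvmet host entry, restricts the initiator to one digest/dhgroup pair, attaches with DH-HMAC-CHAP, and detaches again. One pass looks roughly like the sketch below. The configfs attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) are the standard kernel nvmet ones and are an assumption here, since xtrace does not show the redirection targets of the echo calls above; the key strings are shortened for brevity.

    # One connect_authenticate pass: digest=sha256, dhgroup=ffdhe2048, keyid=1
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)'    > "$host/dhchap_hash"      # assumed attribute paths
    echo ffdhe2048         > "$host/dhchap_dhgroup"
    echo 'DHHC-1:00:NTU...' > "$host/dhchap_key"      # keys[1], shortened
    echo 'DHHC-1:02:NzE...' > "$host/dhchap_ctrl_key" # ckeys[1], shortened

    "$RPC" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    "$RPC" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    "$RPC" bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0 on success
    "$RPC" bdev_nvme_detach_controller nvme0
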
00:33:01.926 09:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.926 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.926 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.926 09:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.926 09:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.926 09:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.926 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:01.926 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:33:01.926 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.926 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:01.926 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:01.926 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:01.926 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzZmOWJhMzk0YTJjYjY1NWFjNTk1ZWYxZDFjZDJhNzkpLKwQ: 00:33:01.926 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2JiNTZmYTE4ZDI4Yzc2Y2NiZTU5YWEyYjg4ZmExZGFsP2rI: 00:33:01.926 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:01.926 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:01.926 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzZmOWJhMzk0YTJjYjY1NWFjNTk1ZWYxZDFjZDJhNzkpLKwQ: 00:33:01.926 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2JiNTZmYTE4ZDI4Yzc2Y2NiZTU5YWEyYjg4ZmExZGFsP2rI: ]] 00:33:01.926 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2JiNTZmYTE4ZDI4Yzc2Y2NiZTU5YWEyYjg4ZmExZGFsP2rI: 00:33:01.926 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:33:01.926 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.926 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:01.926 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:01.926 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:01.926 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.926 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:01.926 09:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.926 09:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.927 09:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.927 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.927 09:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:01.927 09:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:01.927 09:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:01.927 09:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.927 09:42:46 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.927 09:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:01.927 09:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:01.927 09:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:01.927 09:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:01.927 09:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:01.927 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:01.927 09:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.927 09:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.927 nvme0n1 00:33:01.927 09:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.927 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:01.927 09:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.927 09:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.927 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:01.927 09:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.185 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.185 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:02.185 09:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.185 09:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.185 09:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.185 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:02.185 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:33:02.185 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.185 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:02.185 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:02.185 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:02.185 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDViNDA5NzY3M2Q1MDE2YjA1NWQ2ZTc0YTZiMTNlNjEyZGU3NDZjMzdkMzdkM2I5rKGuKA==: 00:33:02.185 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWIyYjliNGQyZTEwZmI1NTRhYmUyZjUyYTEyN2VmYzIJweoA: 00:33:02.185 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:02.185 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:02.185 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDViNDA5NzY3M2Q1MDE2YjA1NWQ2ZTc0YTZiMTNlNjEyZGU3NDZjMzdkMzdkM2I5rKGuKA==: 00:33:02.185 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWIyYjliNGQyZTEwZmI1NTRhYmUyZjUyYTEyN2VmYzIJweoA: ]] 00:33:02.185 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWIyYjliNGQyZTEwZmI1NTRhYmUyZjUyYTEyN2VmYzIJweoA: 00:33:02.185 09:42:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:33:02.185 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:02.185 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:02.185 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:02.185 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:02.185 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:02.185 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:02.185 09:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.185 09:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.185 09:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.185 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.185 09:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:02.185 09:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:02.185 09:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:02.185 09:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.185 09:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:02.185 09:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:02.185 09:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:02.185 09:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:02.185 09:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:02.185 09:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:02.185 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:02.185 09:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.185 09:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.185 nvme0n1 00:33:02.185 09:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.185 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.185 09:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.185 09:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.185 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:02.185 09:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.185 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.185 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:02.185 09:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.186 09:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.444 09:42:46 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.444 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:02.444 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:33:02.444 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.444 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:02.444 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:02.444 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:02.444 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTljNzg0NWZhOWZiN2FkZjc1ZDczMWYzZDU2MjE3YWY3NWE5YmFlNzUzMDhlMDA0ZmRhMTgzNjU3OTc1YzRlNNKnvm8=: 00:33:02.444 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:02.444 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:02.444 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:02.444 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTljNzg0NWZhOWZiN2FkZjc1ZDczMWYzZDU2MjE3YWY3NWE5YmFlNzUzMDhlMDA0ZmRhMTgzNjU3OTc1YzRlNNKnvm8=: 00:33:02.444 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:02.444 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:33:02.444 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:02.444 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:02.444 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:02.444 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:02.444 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:02.444 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:02.444 09:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.444 09:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.444 09:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.444 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.444 09:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:02.444 09:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:02.444 09:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:02.444 09:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.444 09:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:02.444 09:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:02.444 09:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:02.444 09:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:02.444 09:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:02.444 09:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:02.444 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:02.444 09:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.444 09:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.444 nvme0n1 00:33:02.444 09:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.444 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.444 09:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.444 09:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.444 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:02.444 09:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.444 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.445 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:02.445 09:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.445 09:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.445 09:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.445 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:02.445 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:02.445 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:33:02.445 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.445 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:02.445 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:02.445 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:02.445 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjkzNzQ3ZDcxM2YyNDY1MjQxZGQ0YzEzMjJmMTRiMGXzFvzn: 00:33:02.445 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTcxYzk5NmM2MzA0NmY4MzJiMGIwNjdlMTY5Y2IwOGRkZWE0ZmM0NGY1OTk5YjRkOTVjOGM2NzA5MzgzODZjZmQbMFE=: 00:33:02.445 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:02.445 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:02.445 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjkzNzQ3ZDcxM2YyNDY1MjQxZGQ0YzEzMjJmMTRiMGXzFvzn: 00:33:02.445 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTcxYzk5NmM2MzA0NmY4MzJiMGIwNjdlMTY5Y2IwOGRkZWE0ZmM0NGY1OTk5YjRkOTVjOGM2NzA5MzgzODZjZmQbMFE=: ]] 00:33:02.445 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTcxYzk5NmM2MzA0NmY4MzJiMGIwNjdlMTY5Y2IwOGRkZWE0ZmM0NGY1OTk5YjRkOTVjOGM2NzA5MzgzODZjZmQbMFE=: 00:33:02.445 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:33:02.445 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:02.445 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:02.445 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:02.445 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:02.445 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:33:02.445 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:02.445 09:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.445 09:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.445 09:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.445 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.445 09:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:02.445 09:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:02.445 09:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:02.445 09:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.445 09:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:02.445 09:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:02.445 09:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:02.445 09:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:02.445 09:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:02.445 09:42:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:02.445 09:42:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:02.445 09:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.445 09:42:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.703 nvme0n1 00:33:02.703 09:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.703 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.703 09:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.703 09:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.703 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:02.703 09:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.703 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.703 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:02.703 09:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.703 09:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.703 09:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.703 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:02.703 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:33:02.703 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.703 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:02.703 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:02.703 09:42:47 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:33:02.703 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTU0ZjY3ZjJkODc2NmI5ZTg2N2RjZDFmZDkyYjIwZGE5ZDUxYmZmYmQ2NmMxYjcxOuJ+qQ==: 00:33:02.703 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzExYzlkNGM4YzRlZjgzOTI4YTRhNzQwNmRjNjQzYzRkMzU3Y2U1MjY0ZWI2YWYw4Fl4Ww==: 00:33:02.703 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:02.703 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:02.703 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTU0ZjY3ZjJkODc2NmI5ZTg2N2RjZDFmZDkyYjIwZGE5ZDUxYmZmYmQ2NmMxYjcxOuJ+qQ==: 00:33:02.703 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzExYzlkNGM4YzRlZjgzOTI4YTRhNzQwNmRjNjQzYzRkMzU3Y2U1MjY0ZWI2YWYw4Fl4Ww==: ]] 00:33:02.703 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzExYzlkNGM4YzRlZjgzOTI4YTRhNzQwNmRjNjQzYzRkMzU3Y2U1MjY0ZWI2YWYw4Fl4Ww==: 00:33:02.703 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:33:02.703 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:02.703 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:02.703 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:02.703 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:02.703 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:02.704 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:02.704 09:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.704 09:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.704 09:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.704 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.704 09:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:02.704 09:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:02.704 09:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:02.704 09:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.704 09:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:02.704 09:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:02.704 09:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:02.704 09:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:02.704 09:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:02.704 09:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:02.704 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:02.704 09:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.704 09:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.962 nvme0n1 00:33:02.962 
09:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.962 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.962 09:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.962 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:02.962 09:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.962 09:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.962 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.962 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:02.962 09:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.962 09:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.962 09:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.962 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:02.962 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:33:02.962 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.962 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:02.962 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:02.962 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:02.962 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzZmOWJhMzk0YTJjYjY1NWFjNTk1ZWYxZDFjZDJhNzkpLKwQ: 00:33:02.962 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2JiNTZmYTE4ZDI4Yzc2Y2NiZTU5YWEyYjg4ZmExZGFsP2rI: 00:33:02.962 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:02.962 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:02.962 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzZmOWJhMzk0YTJjYjY1NWFjNTk1ZWYxZDFjZDJhNzkpLKwQ: 00:33:02.962 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2JiNTZmYTE4ZDI4Yzc2Y2NiZTU5YWEyYjg4ZmExZGFsP2rI: ]] 00:33:02.962 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2JiNTZmYTE4ZDI4Yzc2Y2NiZTU5YWEyYjg4ZmExZGFsP2rI: 00:33:02.962 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:33:02.962 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:02.962 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:02.962 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:02.962 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:02.962 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:02.962 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:02.962 09:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.962 09:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.962 09:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.962 09:42:47 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:33:02.962 09:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:02.962 09:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:02.962 09:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:02.962 09:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.962 09:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:02.962 09:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:02.962 09:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:02.962 09:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:02.962 09:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:02.962 09:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:02.962 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:02.962 09:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.962 09:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.221 nvme0n1 00:33:03.221 09:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.221 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:03.221 09:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.221 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:03.221 09:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.221 09:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.221 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:03.221 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:03.221 09:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.221 09:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.221 09:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.221 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:03.221 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:33:03.221 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:03.221 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:03.221 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:03.221 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:03.221 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDViNDA5NzY3M2Q1MDE2YjA1NWQ2ZTc0YTZiMTNlNjEyZGU3NDZjMzdkMzdkM2I5rKGuKA==: 00:33:03.221 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWIyYjliNGQyZTEwZmI1NTRhYmUyZjUyYTEyN2VmYzIJweoA: 00:33:03.221 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:03.221 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
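The matching target-side step, nvmet_auth_set_key, appears in the trace only as a series of echoes ('hmac(sha256)', the dhgroup name, and the DHHC-1 secrets). A simplified sketch of what those echoes feed is below; the configfs path and attribute names are assumptions, since the redirection targets are not visible in this excerpt, and the real helper takes a keyid and indexes the keys/ckeys arrays rather than taking the secrets as arguments.

  nvmet_auth_set_key() {
      local digest=$1 dhgroup=$2 key=$3 ckey=$4
      # assumed kernel nvmet configfs host entry; only the echoed values come from the trace
      local host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

      echo "hmac(${digest})" > "${host_dir}/dhchap_hash"       # e.g. hmac(sha256)
      echo "${dhgroup}"      > "${host_dir}/dhchap_dhgroup"    # e.g. ffdhe3072
      echo "${key}"          > "${host_dir}/dhchap_key"        # DHHC-1:01:... secret
      # the controller (bidirectional) key is optional; set it only when one is provided
      [[ -n ${ckey} ]] && echo "${ckey}" > "${host_dir}/dhchap_ctrl_key"
  }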
00:33:03.221 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDViNDA5NzY3M2Q1MDE2YjA1NWQ2ZTc0YTZiMTNlNjEyZGU3NDZjMzdkMzdkM2I5rKGuKA==: 00:33:03.221 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWIyYjliNGQyZTEwZmI1NTRhYmUyZjUyYTEyN2VmYzIJweoA: ]] 00:33:03.221 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWIyYjliNGQyZTEwZmI1NTRhYmUyZjUyYTEyN2VmYzIJweoA: 00:33:03.221 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:33:03.221 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:03.221 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:03.221 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:03.221 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:03.221 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:03.221 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:03.221 09:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.221 09:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.221 09:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.221 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:03.221 09:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:03.221 09:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:03.221 09:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:03.221 09:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:03.221 09:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:03.221 09:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:03.221 09:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:03.221 09:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:03.221 09:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:03.221 09:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:03.222 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:03.222 09:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.222 09:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.480 nvme0n1 00:33:03.480 09:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.480 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:03.480 09:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.480 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:03.480 09:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.480 09:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.480 
09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:03.480 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:03.480 09:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.480 09:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.480 09:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.480 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:03.480 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:33:03.480 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:03.480 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:03.480 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:03.480 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:03.480 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTljNzg0NWZhOWZiN2FkZjc1ZDczMWYzZDU2MjE3YWY3NWE5YmFlNzUzMDhlMDA0ZmRhMTgzNjU3OTc1YzRlNNKnvm8=: 00:33:03.480 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:03.480 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:03.480 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:03.480 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTljNzg0NWZhOWZiN2FkZjc1ZDczMWYzZDU2MjE3YWY3NWE5YmFlNzUzMDhlMDA0ZmRhMTgzNjU3OTc1YzRlNNKnvm8=: 00:33:03.480 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:03.480 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:33:03.480 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:03.480 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:03.480 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:03.480 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:03.480 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:03.480 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:03.480 09:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.480 09:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.480 09:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.480 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:03.480 09:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:03.480 09:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:03.480 09:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:03.480 09:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:03.480 09:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:03.480 09:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:03.480 09:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:03.480 09:42:47 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:03.480 09:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:03.480 09:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:03.481 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:03.481 09:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.481 09:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.738 nvme0n1 00:33:03.738 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.738 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:03.738 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.738 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:03.738 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.738 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.738 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:03.738 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:03.738 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.738 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.738 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.738 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:03.738 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:03.738 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:33:03.738 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:03.738 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:03.738 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:03.738 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:03.738 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjkzNzQ3ZDcxM2YyNDY1MjQxZGQ0YzEzMjJmMTRiMGXzFvzn: 00:33:03.738 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTcxYzk5NmM2MzA0NmY4MzJiMGIwNjdlMTY5Y2IwOGRkZWE0ZmM0NGY1OTk5YjRkOTVjOGM2NzA5MzgzODZjZmQbMFE=: 00:33:03.738 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:03.738 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:03.738 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjkzNzQ3ZDcxM2YyNDY1MjQxZGQ0YzEzMjJmMTRiMGXzFvzn: 00:33:03.738 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTcxYzk5NmM2MzA0NmY4MzJiMGIwNjdlMTY5Y2IwOGRkZWE0ZmM0NGY1OTk5YjRkOTVjOGM2NzA5MzgzODZjZmQbMFE=: ]] 00:33:03.738 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTcxYzk5NmM2MzA0NmY4MzJiMGIwNjdlMTY5Y2IwOGRkZWE0ZmM0NGY1OTk5YjRkOTVjOGM2NzA5MzgzODZjZmQbMFE=: 00:33:03.738 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:33:03.738 09:42:48 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:03.738 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:03.738 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:03.738 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:03.738 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:03.738 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:03.738 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.738 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.738 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.738 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:03.738 09:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:03.738 09:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:03.738 09:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:03.738 09:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:03.738 09:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:03.738 09:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:03.738 09:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:03.738 09:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:03.738 09:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:03.738 09:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:03.738 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:03.738 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.738 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.303 nvme0n1 00:33:04.303 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.303 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:04.303 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.303 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.303 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:04.303 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.303 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:04.303 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:04.303 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.303 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.303 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.303 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:33:04.303 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:33:04.303 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:04.303 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:04.303 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:04.303 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:04.303 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTU0ZjY3ZjJkODc2NmI5ZTg2N2RjZDFmZDkyYjIwZGE5ZDUxYmZmYmQ2NmMxYjcxOuJ+qQ==: 00:33:04.303 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzExYzlkNGM4YzRlZjgzOTI4YTRhNzQwNmRjNjQzYzRkMzU3Y2U1MjY0ZWI2YWYw4Fl4Ww==: 00:33:04.303 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:04.303 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:04.303 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTU0ZjY3ZjJkODc2NmI5ZTg2N2RjZDFmZDkyYjIwZGE5ZDUxYmZmYmQ2NmMxYjcxOuJ+qQ==: 00:33:04.303 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzExYzlkNGM4YzRlZjgzOTI4YTRhNzQwNmRjNjQzYzRkMzU3Y2U1MjY0ZWI2YWYw4Fl4Ww==: ]] 00:33:04.303 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzExYzlkNGM4YzRlZjgzOTI4YTRhNzQwNmRjNjQzYzRkMzU3Y2U1MjY0ZWI2YWYw4Fl4Ww==: 00:33:04.303 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:33:04.303 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:04.303 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:04.303 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:04.303 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:04.303 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:04.303 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:04.303 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.303 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.303 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.303 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:04.303 09:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:04.303 09:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:04.303 09:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:04.303 09:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:04.303 09:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:04.303 09:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:04.303 09:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:04.303 09:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:04.303 09:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:04.303 09:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:04.303 09:42:48 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:04.303 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.303 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.561 nvme0n1 00:33:04.561 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.561 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:04.561 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.561 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.561 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:04.561 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.561 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:04.561 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:04.561 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.561 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.561 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.561 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:04.561 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:33:04.561 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:04.561 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:04.561 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:04.561 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:04.561 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzZmOWJhMzk0YTJjYjY1NWFjNTk1ZWYxZDFjZDJhNzkpLKwQ: 00:33:04.561 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2JiNTZmYTE4ZDI4Yzc2Y2NiZTU5YWEyYjg4ZmExZGFsP2rI: 00:33:04.561 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:04.561 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:04.561 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzZmOWJhMzk0YTJjYjY1NWFjNTk1ZWYxZDFjZDJhNzkpLKwQ: 00:33:04.561 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2JiNTZmYTE4ZDI4Yzc2Y2NiZTU5YWEyYjg4ZmExZGFsP2rI: ]] 00:33:04.561 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2JiNTZmYTE4ZDI4Yzc2Y2NiZTU5YWEyYjg4ZmExZGFsP2rI: 00:33:04.561 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:33:04.561 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:04.561 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:04.561 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:04.561 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:04.561 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:04.561 09:42:48 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:04.561 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.561 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.561 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.561 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:04.561 09:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:04.561 09:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:04.561 09:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:04.561 09:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:04.561 09:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:04.561 09:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:04.561 09:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:04.561 09:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:04.561 09:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:04.561 09:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:04.561 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:04.561 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.561 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.820 nvme0n1 00:33:04.820 09:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.820 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:04.820 09:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.820 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:04.820 09:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.820 09:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.820 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:04.820 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:04.820 09:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.820 09:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.820 09:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.820 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:04.820 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:33:04.820 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:04.820 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:04.820 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:04.820 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
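The get_main_ns_ip helper that runs before every attach resolves which address the initiator should dial. Its logic, reconstructed from the nvmf/common.sh trace, is roughly the sketch below; the TEST_TRANSPORT variable name and the function wrapper are assumptions, while the candidate map, the tcp transport, and the 10.0.0.1 result are taken from the trace.

  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # RDMA runs dial the first target IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP       # TCP runs (this log) use the initiator IP
      [[ -z ${TEST_TRANSPORT:-} ]] && return 1                 # "tcp" in this run
      [[ -z ${ip_candidates[$TEST_TRANSPORT]:-} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}          # a variable *name*, e.g. NVMF_INITIATOR_IP
      ip=${!ip}                                     # indirect expansion to the address itself
      [[ -z $ip ]] && return 1                      # 10.0.0.1 here
      echo "$ip"
  }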
00:33:04.820 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDViNDA5NzY3M2Q1MDE2YjA1NWQ2ZTc0YTZiMTNlNjEyZGU3NDZjMzdkMzdkM2I5rKGuKA==: 00:33:04.820 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWIyYjliNGQyZTEwZmI1NTRhYmUyZjUyYTEyN2VmYzIJweoA: 00:33:04.820 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:04.820 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:04.820 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDViNDA5NzY3M2Q1MDE2YjA1NWQ2ZTc0YTZiMTNlNjEyZGU3NDZjMzdkMzdkM2I5rKGuKA==: 00:33:04.820 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWIyYjliNGQyZTEwZmI1NTRhYmUyZjUyYTEyN2VmYzIJweoA: ]] 00:33:04.820 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWIyYjliNGQyZTEwZmI1NTRhYmUyZjUyYTEyN2VmYzIJweoA: 00:33:04.820 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:33:04.820 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:04.820 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:04.820 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:04.820 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:04.820 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:04.820 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:04.820 09:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.820 09:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.820 09:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.820 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:04.820 09:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:04.820 09:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:04.820 09:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:04.820 09:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:04.820 09:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:04.820 09:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:04.820 09:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:04.820 09:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:04.820 09:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:04.820 09:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:04.820 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:04.820 09:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.820 09:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.387 nvme0n1 00:33:05.387 09:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.387 09:42:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:05.387 09:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.387 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:05.387 09:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.387 09:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.387 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:05.387 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:05.387 09:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.387 09:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.387 09:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.387 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:05.387 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:33:05.387 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:05.387 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:05.387 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:05.387 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:05.387 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTljNzg0NWZhOWZiN2FkZjc1ZDczMWYzZDU2MjE3YWY3NWE5YmFlNzUzMDhlMDA0ZmRhMTgzNjU3OTc1YzRlNNKnvm8=: 00:33:05.387 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:05.387 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:05.387 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:05.387 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTljNzg0NWZhOWZiN2FkZjc1ZDczMWYzZDU2MjE3YWY3NWE5YmFlNzUzMDhlMDA0ZmRhMTgzNjU3OTc1YzRlNNKnvm8=: 00:33:05.387 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:05.387 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:33:05.387 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:05.387 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:05.387 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:05.387 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:05.387 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:05.387 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:05.387 09:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.387 09:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.387 09:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.387 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:05.387 09:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:05.387 09:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:05.387 09:42:49 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:33:05.387 09:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:05.387 09:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:05.387 09:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:05.387 09:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:05.387 09:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:05.387 09:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:05.387 09:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:05.387 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:05.387 09:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.387 09:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.645 nvme0n1 00:33:05.645 09:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.645 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:05.645 09:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.645 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:05.645 09:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.645 09:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.645 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:05.645 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:05.645 09:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.645 09:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.645 09:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.645 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:05.645 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:05.645 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:33:05.645 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:05.645 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:05.645 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:05.645 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:05.645 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjkzNzQ3ZDcxM2YyNDY1MjQxZGQ0YzEzMjJmMTRiMGXzFvzn: 00:33:05.645 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTcxYzk5NmM2MzA0NmY4MzJiMGIwNjdlMTY5Y2IwOGRkZWE0ZmM0NGY1OTk5YjRkOTVjOGM2NzA5MzgzODZjZmQbMFE=: 00:33:05.645 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:05.645 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:05.645 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjkzNzQ3ZDcxM2YyNDY1MjQxZGQ0YzEzMjJmMTRiMGXzFvzn: 00:33:05.645 09:42:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTcxYzk5NmM2MzA0NmY4MzJiMGIwNjdlMTY5Y2IwOGRkZWE0ZmM0NGY1OTk5YjRkOTVjOGM2NzA5MzgzODZjZmQbMFE=: ]] 00:33:05.645 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTcxYzk5NmM2MzA0NmY4MzJiMGIwNjdlMTY5Y2IwOGRkZWE0ZmM0NGY1OTk5YjRkOTVjOGM2NzA5MzgzODZjZmQbMFE=: 00:33:05.645 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:33:05.645 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:05.645 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:05.645 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:05.645 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:05.645 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:05.645 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:05.645 09:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.645 09:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.645 09:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.645 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:05.645 09:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:05.645 09:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:05.645 09:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:05.645 09:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:05.645 09:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:05.645 09:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:05.645 09:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:05.645 09:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:05.645 09:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:05.645 09:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:05.645 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:05.645 09:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.645 09:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.210 nvme0n1 00:33:06.210 09:42:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.210 09:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:06.210 09:42:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.210 09:42:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.210 09:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:06.210 09:42:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.210 09:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:06.210 
09:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:06.210 09:42:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.210 09:42:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.210 09:42:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.210 09:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:06.210 09:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:33:06.210 09:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:06.210 09:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:06.210 09:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:06.210 09:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:06.210 09:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTU0ZjY3ZjJkODc2NmI5ZTg2N2RjZDFmZDkyYjIwZGE5ZDUxYmZmYmQ2NmMxYjcxOuJ+qQ==: 00:33:06.210 09:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzExYzlkNGM4YzRlZjgzOTI4YTRhNzQwNmRjNjQzYzRkMzU3Y2U1MjY0ZWI2YWYw4Fl4Ww==: 00:33:06.210 09:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:06.210 09:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:06.210 09:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTU0ZjY3ZjJkODc2NmI5ZTg2N2RjZDFmZDkyYjIwZGE5ZDUxYmZmYmQ2NmMxYjcxOuJ+qQ==: 00:33:06.210 09:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzExYzlkNGM4YzRlZjgzOTI4YTRhNzQwNmRjNjQzYzRkMzU3Y2U1MjY0ZWI2YWYw4Fl4Ww==: ]] 00:33:06.210 09:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzExYzlkNGM4YzRlZjgzOTI4YTRhNzQwNmRjNjQzYzRkMzU3Y2U1MjY0ZWI2YWYw4Fl4Ww==: 00:33:06.210 09:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:33:06.210 09:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:06.211 09:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:06.211 09:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:06.211 09:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:06.211 09:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:06.211 09:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:06.211 09:42:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.211 09:42:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.211 09:42:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.211 09:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:06.211 09:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:06.211 09:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:06.211 09:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:06.211 09:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:06.211 09:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:06.211 09:42:50 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:06.211 09:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:06.211 09:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:06.211 09:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:06.211 09:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:06.211 09:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:06.211 09:42:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.211 09:42:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.776 nvme0n1 00:33:06.776 09:42:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.776 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:06.776 09:42:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.776 09:42:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.776 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:06.776 09:42:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.776 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:06.776 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:06.776 09:42:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.776 09:42:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.776 09:42:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.776 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:06.776 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:33:06.776 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:06.776 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:06.776 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:06.776 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:06.776 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzZmOWJhMzk0YTJjYjY1NWFjNTk1ZWYxZDFjZDJhNzkpLKwQ: 00:33:06.776 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2JiNTZmYTE4ZDI4Yzc2Y2NiZTU5YWEyYjg4ZmExZGFsP2rI: 00:33:06.776 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:06.776 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:06.776 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzZmOWJhMzk0YTJjYjY1NWFjNTk1ZWYxZDFjZDJhNzkpLKwQ: 00:33:06.776 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2JiNTZmYTE4ZDI4Yzc2Y2NiZTU5YWEyYjg4ZmExZGFsP2rI: ]] 00:33:06.776 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2JiNTZmYTE4ZDI4Yzc2Y2NiZTU5YWEyYjg4ZmExZGFsP2rI: 00:33:06.776 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:33:06.776 09:42:51 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:06.776 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:06.776 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:06.776 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:06.776 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:06.776 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:06.776 09:42:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.776 09:42:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.776 09:42:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.776 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:06.776 09:42:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:06.776 09:42:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:06.776 09:42:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:06.776 09:42:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:06.776 09:42:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:06.776 09:42:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:06.776 09:42:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:06.776 09:42:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:06.776 09:42:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:06.776 09:42:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:06.776 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:06.776 09:42:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.776 09:42:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.341 nvme0n1 00:33:07.341 09:42:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.341 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:07.341 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:07.341 09:42:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.341 09:42:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.341 09:42:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.341 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:07.341 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:07.341 09:42:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.341 09:42:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.341 09:42:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.341 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:07.341 
09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:33:07.341 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:07.341 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:07.341 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:07.341 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:07.341 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDViNDA5NzY3M2Q1MDE2YjA1NWQ2ZTc0YTZiMTNlNjEyZGU3NDZjMzdkMzdkM2I5rKGuKA==: 00:33:07.341 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWIyYjliNGQyZTEwZmI1NTRhYmUyZjUyYTEyN2VmYzIJweoA: 00:33:07.341 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:07.341 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:07.341 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDViNDA5NzY3M2Q1MDE2YjA1NWQ2ZTc0YTZiMTNlNjEyZGU3NDZjMzdkMzdkM2I5rKGuKA==: 00:33:07.341 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWIyYjliNGQyZTEwZmI1NTRhYmUyZjUyYTEyN2VmYzIJweoA: ]] 00:33:07.341 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWIyYjliNGQyZTEwZmI1NTRhYmUyZjUyYTEyN2VmYzIJweoA: 00:33:07.341 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:33:07.341 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:07.341 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:07.341 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:07.341 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:07.341 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:07.341 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:07.341 09:42:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.341 09:42:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.341 09:42:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.341 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:07.341 09:42:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:07.341 09:42:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:07.341 09:42:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:07.341 09:42:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:07.341 09:42:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:07.341 09:42:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:07.341 09:42:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:07.341 09:42:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:07.341 09:42:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:07.341 09:42:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:07.341 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:07.341 09:42:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.341 09:42:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.907 nvme0n1 00:33:07.907 09:42:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.907 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:07.907 09:42:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.907 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:07.907 09:42:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.907 09:42:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.907 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:07.907 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:07.907 09:42:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.907 09:42:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.907 09:42:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.907 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:07.907 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:33:07.907 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:07.907 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:07.907 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:07.907 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:07.907 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTljNzg0NWZhOWZiN2FkZjc1ZDczMWYzZDU2MjE3YWY3NWE5YmFlNzUzMDhlMDA0ZmRhMTgzNjU3OTc1YzRlNNKnvm8=: 00:33:07.907 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:07.907 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:07.907 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:07.907 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTljNzg0NWZhOWZiN2FkZjc1ZDczMWYzZDU2MjE3YWY3NWE5YmFlNzUzMDhlMDA0ZmRhMTgzNjU3OTc1YzRlNNKnvm8=: 00:33:07.907 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:07.907 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:33:07.907 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:07.907 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:07.907 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:07.907 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:07.907 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:07.907 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:07.907 09:42:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.907 09:42:52 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:33:07.907 09:42:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.907 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:07.907 09:42:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:07.907 09:42:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:07.907 09:42:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:07.907 09:42:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:07.907 09:42:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:07.907 09:42:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:07.907 09:42:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:07.907 09:42:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:07.907 09:42:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:07.907 09:42:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:07.907 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:07.907 09:42:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.907 09:42:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.472 nvme0n1 00:33:08.472 09:42:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.472 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:08.472 09:42:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.472 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:08.472 09:42:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.473 09:42:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.473 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:08.473 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:08.473 09:42:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.473 09:42:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.730 09:42:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.730 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:08.730 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:08.730 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:33:08.730 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:08.730 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:08.730 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:08.730 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:08.730 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjkzNzQ3ZDcxM2YyNDY1MjQxZGQ0YzEzMjJmMTRiMGXzFvzn: 00:33:08.730 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MTcxYzk5NmM2MzA0NmY4MzJiMGIwNjdlMTY5Y2IwOGRkZWE0ZmM0NGY1OTk5YjRkOTVjOGM2NzA5MzgzODZjZmQbMFE=: 00:33:08.730 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:08.730 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:08.730 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjkzNzQ3ZDcxM2YyNDY1MjQxZGQ0YzEzMjJmMTRiMGXzFvzn: 00:33:08.730 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTcxYzk5NmM2MzA0NmY4MzJiMGIwNjdlMTY5Y2IwOGRkZWE0ZmM0NGY1OTk5YjRkOTVjOGM2NzA5MzgzODZjZmQbMFE=: ]] 00:33:08.730 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTcxYzk5NmM2MzA0NmY4MzJiMGIwNjdlMTY5Y2IwOGRkZWE0ZmM0NGY1OTk5YjRkOTVjOGM2NzA5MzgzODZjZmQbMFE=: 00:33:08.730 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:33:08.730 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:08.730 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:08.730 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:08.730 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:08.730 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:08.730 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:08.730 09:42:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.730 09:42:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.730 09:42:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.730 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:08.730 09:42:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:08.730 09:42:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:08.730 09:42:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:08.730 09:42:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:08.730 09:42:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:08.730 09:42:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:08.730 09:42:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:08.730 09:42:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:08.730 09:42:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:08.730 09:42:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:08.730 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:08.730 09:42:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.730 09:42:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.660 nvme0n1 00:33:09.660 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.660 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:09.660 09:42:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:09.660 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.660 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.660 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.660 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:09.660 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:09.660 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.660 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.660 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.660 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:09.660 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:33:09.660 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:09.660 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:09.660 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:09.660 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:09.660 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTU0ZjY3ZjJkODc2NmI5ZTg2N2RjZDFmZDkyYjIwZGE5ZDUxYmZmYmQ2NmMxYjcxOuJ+qQ==: 00:33:09.660 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzExYzlkNGM4YzRlZjgzOTI4YTRhNzQwNmRjNjQzYzRkMzU3Y2U1MjY0ZWI2YWYw4Fl4Ww==: 00:33:09.660 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:09.660 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:09.660 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTU0ZjY3ZjJkODc2NmI5ZTg2N2RjZDFmZDkyYjIwZGE5ZDUxYmZmYmQ2NmMxYjcxOuJ+qQ==: 00:33:09.660 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzExYzlkNGM4YzRlZjgzOTI4YTRhNzQwNmRjNjQzYzRkMzU3Y2U1MjY0ZWI2YWYw4Fl4Ww==: ]] 00:33:09.660 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzExYzlkNGM4YzRlZjgzOTI4YTRhNzQwNmRjNjQzYzRkMzU3Y2U1MjY0ZWI2YWYw4Fl4Ww==: 00:33:09.660 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:33:09.660 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:09.660 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:09.660 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:09.660 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:09.660 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:09.660 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:09.660 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.660 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.660 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.660 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:09.660 09:42:53 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:33:09.660 09:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:09.660 09:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:09.660 09:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:09.660 09:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:09.660 09:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:09.660 09:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:09.660 09:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:09.660 09:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:09.660 09:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:09.660 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:09.660 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.660 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.624 nvme0n1 00:33:10.624 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.624 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:10.624 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.624 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.624 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:10.624 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.624 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:10.624 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:10.624 09:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.624 09:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.624 09:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.624 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:10.624 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:33:10.624 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:10.624 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:10.624 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:10.624 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:10.624 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzZmOWJhMzk0YTJjYjY1NWFjNTk1ZWYxZDFjZDJhNzkpLKwQ: 00:33:10.624 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2JiNTZmYTE4ZDI4Yzc2Y2NiZTU5YWEyYjg4ZmExZGFsP2rI: 00:33:10.624 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:10.624 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:10.624 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:YzZmOWJhMzk0YTJjYjY1NWFjNTk1ZWYxZDFjZDJhNzkpLKwQ: 00:33:10.624 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2JiNTZmYTE4ZDI4Yzc2Y2NiZTU5YWEyYjg4ZmExZGFsP2rI: ]] 00:33:10.624 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2JiNTZmYTE4ZDI4Yzc2Y2NiZTU5YWEyYjg4ZmExZGFsP2rI: 00:33:10.624 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:33:10.624 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:10.624 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:10.624 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:10.624 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:10.624 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:10.624 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:10.624 09:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.624 09:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.624 09:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.624 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:10.624 09:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:10.624 09:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:10.624 09:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:10.624 09:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:10.624 09:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:10.624 09:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:10.624 09:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:10.624 09:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:10.624 09:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:10.624 09:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:10.624 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:10.624 09:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.624 09:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.995 nvme0n1 00:33:11.995 09:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.995 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:11.995 09:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.995 09:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.995 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:11.995 09:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.995 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:11.995 
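Before each of these attach attempts, nvmet_auth_set_key (host/auth.sh@42-51 in the trace) pushes the same key material to the kernel nvmet target. The xtrace shows only the echoed payloads for this iteration ('hmac(sha256)', ffdhe8192, the DHHC-1 host key and its controller key), not the files they are redirected into. A plausible reconstruction, assuming the standard nvmet configfs attributes of a host entry; the paths below are an assumption, only the echoed values come from the log:

    # Hedged sketch of what the echoes in nvmet_auth_set_key appear to do for keyid 2.
    hostnqn=nqn.2024-02.io.spdk:host0
    host_cfg=/sys/kernel/config/nvmet/hosts/$hostnqn            # assumed configfs location

    echo 'hmac(sha256)' > "$host_cfg/dhchap_hash"               # auth.sh@48
    echo ffdhe8192 > "$host_cfg/dhchap_dhgroup"                 # auth.sh@49
    echo "DHHC-1:01:YzZmOWJhMzk0YTJjYjY1NWFjNTk1ZWYxZDFjZDJhNzkpLKwQ:" > "$host_cfg/dhchap_key"   # auth.sh@50
    # auth.sh@51 only writes a controller key when one is defined for this keyid:
    ckey="DHHC-1:01:M2JiNTZmYTE4ZDI4Yzc2Y2NiZTU5YWEyYjg4ZmExZGFsP2rI:"
    if [[ -n $ckey ]]; then
        echo "$ckey" > "$host_cfg/dhchap_ctrl_key"
    fi

With the target primed this way, the bdev_nvme_attach_controller call that follows in the log (--dhchap-key key2 --dhchap-ctrlr-key ckey2) exercises bidirectional DH-HMAC-CHAP over the ffdhe8192 group.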
09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:11.995 09:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.995 09:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.995 09:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.995 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:11.995 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:33:11.995 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:11.995 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:11.995 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:11.995 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:11.995 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDViNDA5NzY3M2Q1MDE2YjA1NWQ2ZTc0YTZiMTNlNjEyZGU3NDZjMzdkMzdkM2I5rKGuKA==: 00:33:11.995 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWIyYjliNGQyZTEwZmI1NTRhYmUyZjUyYTEyN2VmYzIJweoA: 00:33:11.995 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:11.995 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:11.995 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDViNDA5NzY3M2Q1MDE2YjA1NWQ2ZTc0YTZiMTNlNjEyZGU3NDZjMzdkMzdkM2I5rKGuKA==: 00:33:11.995 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWIyYjliNGQyZTEwZmI1NTRhYmUyZjUyYTEyN2VmYzIJweoA: ]] 00:33:11.995 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWIyYjliNGQyZTEwZmI1NTRhYmUyZjUyYTEyN2VmYzIJweoA: 00:33:11.995 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:33:11.995 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:11.995 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:11.995 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:11.995 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:11.995 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:11.996 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:11.996 09:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.996 09:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.996 09:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.996 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:11.996 09:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:11.996 09:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:11.996 09:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:11.996 09:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:11.996 09:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:11.996 09:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:33:11.996 09:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:11.996 09:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:11.996 09:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:11.996 09:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:11.996 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:11.996 09:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.996 09:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.927 nvme0n1 00:33:12.927 09:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.927 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:12.927 09:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.927 09:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.927 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:12.927 09:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.927 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:12.927 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:12.927 09:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.927 09:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.927 09:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.927 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:12.927 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:33:12.927 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:12.927 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:12.927 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:12.927 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:12.927 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTljNzg0NWZhOWZiN2FkZjc1ZDczMWYzZDU2MjE3YWY3NWE5YmFlNzUzMDhlMDA0ZmRhMTgzNjU3OTc1YzRlNNKnvm8=: 00:33:12.927 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:12.927 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:12.927 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:12.927 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTljNzg0NWZhOWZiN2FkZjc1ZDczMWYzZDU2MjE3YWY3NWE5YmFlNzUzMDhlMDA0ZmRhMTgzNjU3OTc1YzRlNNKnvm8=: 00:33:12.927 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:12.927 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:33:12.927 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:12.928 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:12.928 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:12.928 
09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:12.928 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:12.928 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:12.928 09:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.928 09:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.928 09:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.928 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:12.928 09:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:12.928 09:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:12.928 09:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:12.928 09:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:12.928 09:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:12.928 09:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:12.928 09:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:12.928 09:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:12.928 09:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:12.928 09:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:12.928 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:12.928 09:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.928 09:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.860 nvme0n1 00:33:13.860 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.860 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:13.860 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:13.860 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.860 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.860 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.860 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:13.861 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:13.861 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.861 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.861 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.861 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:13.861 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:13.861 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:13.861 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:33:13.861 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:13.861 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:13.861 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:13.861 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:13.861 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjkzNzQ3ZDcxM2YyNDY1MjQxZGQ0YzEzMjJmMTRiMGXzFvzn: 00:33:13.861 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTcxYzk5NmM2MzA0NmY4MzJiMGIwNjdlMTY5Y2IwOGRkZWE0ZmM0NGY1OTk5YjRkOTVjOGM2NzA5MzgzODZjZmQbMFE=: 00:33:13.861 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:13.861 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:13.861 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjkzNzQ3ZDcxM2YyNDY1MjQxZGQ0YzEzMjJmMTRiMGXzFvzn: 00:33:13.861 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTcxYzk5NmM2MzA0NmY4MzJiMGIwNjdlMTY5Y2IwOGRkZWE0ZmM0NGY1OTk5YjRkOTVjOGM2NzA5MzgzODZjZmQbMFE=: ]] 00:33:13.861 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTcxYzk5NmM2MzA0NmY4MzJiMGIwNjdlMTY5Y2IwOGRkZWE0ZmM0NGY1OTk5YjRkOTVjOGM2NzA5MzgzODZjZmQbMFE=: 00:33:13.861 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:33:13.861 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:13.861 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:13.861 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:13.861 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:13.861 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:13.861 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:13.861 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.861 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.861 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.861 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:13.861 09:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:13.861 09:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:13.861 09:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:13.861 09:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:13.861 09:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:13.861 09:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:13.861 09:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:13.861 09:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:13.861 09:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:13.861 09:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:13.861 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:13.861 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.861 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.861 nvme0n1 00:33:13.861 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.861 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:13.861 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:13.861 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.861 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.861 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.119 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:14.119 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:14.119 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.119 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.119 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.119 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:14.119 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:33:14.119 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:14.119 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:14.119 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:14.119 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:14.119 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTU0ZjY3ZjJkODc2NmI5ZTg2N2RjZDFmZDkyYjIwZGE5ZDUxYmZmYmQ2NmMxYjcxOuJ+qQ==: 00:33:14.119 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzExYzlkNGM4YzRlZjgzOTI4YTRhNzQwNmRjNjQzYzRkMzU3Y2U1MjY0ZWI2YWYw4Fl4Ww==: 00:33:14.119 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:14.119 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:14.119 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTU0ZjY3ZjJkODc2NmI5ZTg2N2RjZDFmZDkyYjIwZGE5ZDUxYmZmYmQ2NmMxYjcxOuJ+qQ==: 00:33:14.119 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzExYzlkNGM4YzRlZjgzOTI4YTRhNzQwNmRjNjQzYzRkMzU3Y2U1MjY0ZWI2YWYw4Fl4Ww==: ]] 00:33:14.119 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzExYzlkNGM4YzRlZjgzOTI4YTRhNzQwNmRjNjQzYzRkMzU3Y2U1MjY0ZWI2YWYw4Fl4Ww==: 00:33:14.119 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:33:14.119 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:14.119 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:14.119 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:14.119 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:14.119 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
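For readers following the trace: every connect_authenticate iteration above exercises the same two host-side RPCs, first restricting the initiator to a single DH-HMAC-CHAP digest/dhgroup pair with bdev_nvme_set_options, then attaching with the key pair for the keyid under test and tearing the controller down again before the next pass. The shell sketch below replays that host-side sequence with SPDK's scripts/rpc.py; the rpc.py path and the assumption that the key names key0/ckey0 were registered by an earlier, unshown part of the test are mine, while the NQNs, address, port, and --dhchap-* flags come directly from the log.

#!/usr/bin/env bash
# Host-side sketch of one connect_authenticate iteration (sha384 / ffdhe2048 / keyid 0),
# assuming the target is already listening on 10.0.0.1:4420 and that the DH-HMAC-CHAP
# key names key0/ckey0 were registered earlier in the test (not shown in this excerpt).
rpc=./scripts/rpc.py   # path to SPDK's RPC client (assumed)
digest=sha384
dhgroup=ffdhe2048
keyid=0

# Limit the initiator to the digest/dhgroup pair under test.
"$rpc" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Connect, supplying the host key and (when one exists) the controller key for this keyid.
"$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

# Verify the controller authenticated and came up, then detach before the next iteration.
"$rpc" bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
"$rpc" bdev_nvme_detach_controller nvme0

In the log, each successful pass of this sequence is marked by the bare device name nvme0n1 appearing once the attach completes, followed by the bdev_nvme_get_controllers / bdev_nvme_detach_controller pair seen above.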
00:33:14.119 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:14.119 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.119 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.119 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.119 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:14.119 09:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:14.119 09:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:14.119 09:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:14.119 09:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:14.119 09:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:14.119 09:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:14.119 09:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:14.119 09:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:14.119 09:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:14.119 09:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:14.119 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:14.119 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.119 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.119 nvme0n1 00:33:14.119 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.119 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:14.120 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.120 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.120 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:14.120 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.120 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:14.120 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:14.120 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.120 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.378 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.378 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:14.378 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:33:14.378 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:14.378 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:14.378 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:14.378 09:42:58 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:33:14.378 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzZmOWJhMzk0YTJjYjY1NWFjNTk1ZWYxZDFjZDJhNzkpLKwQ: 00:33:14.378 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2JiNTZmYTE4ZDI4Yzc2Y2NiZTU5YWEyYjg4ZmExZGFsP2rI: 00:33:14.378 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:14.378 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:14.378 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzZmOWJhMzk0YTJjYjY1NWFjNTk1ZWYxZDFjZDJhNzkpLKwQ: 00:33:14.378 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2JiNTZmYTE4ZDI4Yzc2Y2NiZTU5YWEyYjg4ZmExZGFsP2rI: ]] 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2JiNTZmYTE4ZDI4Yzc2Y2NiZTU5YWEyYjg4ZmExZGFsP2rI: 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.379 nvme0n1 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDViNDA5NzY3M2Q1MDE2YjA1NWQ2ZTc0YTZiMTNlNjEyZGU3NDZjMzdkMzdkM2I5rKGuKA==: 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWIyYjliNGQyZTEwZmI1NTRhYmUyZjUyYTEyN2VmYzIJweoA: 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDViNDA5NzY3M2Q1MDE2YjA1NWQ2ZTc0YTZiMTNlNjEyZGU3NDZjMzdkMzdkM2I5rKGuKA==: 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWIyYjliNGQyZTEwZmI1NTRhYmUyZjUyYTEyN2VmYzIJweoA: ]] 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWIyYjliNGQyZTEwZmI1NTRhYmUyZjUyYTEyN2VmYzIJweoA: 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.379 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.638 nvme0n1 00:33:14.638 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.638 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:14.638 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.638 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.638 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:14.638 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.638 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:14.638 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:14.638 09:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.638 09:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.638 09:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.638 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:14.638 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:33:14.638 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:14.638 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:14.638 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:14.638 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:14.638 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTljNzg0NWZhOWZiN2FkZjc1ZDczMWYzZDU2MjE3YWY3NWE5YmFlNzUzMDhlMDA0ZmRhMTgzNjU3OTc1YzRlNNKnvm8=: 00:33:14.638 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:14.638 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:14.638 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:14.638 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OTljNzg0NWZhOWZiN2FkZjc1ZDczMWYzZDU2MjE3YWY3NWE5YmFlNzUzMDhlMDA0ZmRhMTgzNjU3OTc1YzRlNNKnvm8=: 00:33:14.638 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:14.638 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:33:14.638 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:14.638 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:14.638 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:14.638 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:14.638 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:14.638 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:14.638 09:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.638 09:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.638 09:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.638 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:14.638 09:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:14.638 09:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:14.638 09:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:14.638 09:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:14.638 09:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:14.638 09:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:14.638 09:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:14.638 09:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:14.638 09:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:14.638 09:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:14.638 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:14.638 09:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.638 09:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.897 nvme0n1 00:33:14.897 09:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.897 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:14.897 09:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.897 09:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.897 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:14.897 09:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.897 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:14.897 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:14.897 09:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:33:14.897 09:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.897 09:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.897 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:14.897 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:14.897 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:33:14.897 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:14.897 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:14.897 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:14.897 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:14.897 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjkzNzQ3ZDcxM2YyNDY1MjQxZGQ0YzEzMjJmMTRiMGXzFvzn: 00:33:14.897 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTcxYzk5NmM2MzA0NmY4MzJiMGIwNjdlMTY5Y2IwOGRkZWE0ZmM0NGY1OTk5YjRkOTVjOGM2NzA5MzgzODZjZmQbMFE=: 00:33:14.897 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:14.897 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:14.897 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjkzNzQ3ZDcxM2YyNDY1MjQxZGQ0YzEzMjJmMTRiMGXzFvzn: 00:33:14.897 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTcxYzk5NmM2MzA0NmY4MzJiMGIwNjdlMTY5Y2IwOGRkZWE0ZmM0NGY1OTk5YjRkOTVjOGM2NzA5MzgzODZjZmQbMFE=: ]] 00:33:14.897 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTcxYzk5NmM2MzA0NmY4MzJiMGIwNjdlMTY5Y2IwOGRkZWE0ZmM0NGY1OTk5YjRkOTVjOGM2NzA5MzgzODZjZmQbMFE=: 00:33:14.897 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:33:14.897 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:14.897 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:14.897 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:14.897 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:14.897 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:14.897 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:14.897 09:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.897 09:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.897 09:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.897 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:14.897 09:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:14.897 09:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:14.897 09:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:14.897 09:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:14.897 09:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:14.897 09:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
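The get_main_ns_ip trace that repeats throughout this section (nvmf/common.sh@741-755) is only resolving which address the host should dial for the active transport: it maps each transport to the name of an environment variable, dereferences it, and bails out if either piece is empty; for this TCP run it always resolves to 10.0.0.1. A minimal re-creation of that helper is sketched below, with variable names mirroring the trace; the exact failure handling is an assumption, since only the successful path appears in this excerpt.

# Minimal re-creation of get_main_ns_ip as traced above; TEST_TRANSPORT and
# NVMF_INITIATOR_IP are assumed to be exported by the surrounding test framework.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        ["rdma"]=NVMF_FIRST_TARGET_IP   # RDMA runs dial the first target IP
        ["tcp"]=NVMF_INITIATOR_IP       # TCP runs (this log) dial the initiator IP
    )

    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}   # name of the variable to dereference
    [[ -z ${!ip} ]] && return 1            # make sure it is actually set

    echo "${!ip}"                          # resolves to 10.0.0.1 in this run
}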
00:33:14.897 09:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:14.897 09:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:14.897 09:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:14.897 09:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:14.897 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:14.897 09:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.897 09:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.155 nvme0n1 00:33:15.155 09:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.155 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:15.155 09:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.155 09:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.155 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:15.155 09:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.155 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:15.155 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:15.155 09:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.155 09:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.155 09:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.155 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:15.155 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:33:15.155 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:15.155 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:15.155 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:15.155 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:15.156 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTU0ZjY3ZjJkODc2NmI5ZTg2N2RjZDFmZDkyYjIwZGE5ZDUxYmZmYmQ2NmMxYjcxOuJ+qQ==: 00:33:15.156 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzExYzlkNGM4YzRlZjgzOTI4YTRhNzQwNmRjNjQzYzRkMzU3Y2U1MjY0ZWI2YWYw4Fl4Ww==: 00:33:15.156 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:15.156 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:15.156 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTU0ZjY3ZjJkODc2NmI5ZTg2N2RjZDFmZDkyYjIwZGE5ZDUxYmZmYmQ2NmMxYjcxOuJ+qQ==: 00:33:15.156 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzExYzlkNGM4YzRlZjgzOTI4YTRhNzQwNmRjNjQzYzRkMzU3Y2U1MjY0ZWI2YWYw4Fl4Ww==: ]] 00:33:15.156 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzExYzlkNGM4YzRlZjgzOTI4YTRhNzQwNmRjNjQzYzRkMzU3Y2U1MjY0ZWI2YWYw4Fl4Ww==: 00:33:15.156 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
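On the target side, the nvmet_auth_set_key calls traced above (host/auth.sh@42-51) push the same digest, DH group, and DHHC-1 secrets into the kernel nvmet host entry so the subsystem will demand matching credentials from nqn.2024-02.io.spdk:host0. Only the echoed values are visible in this excerpt; the configfs attribute paths in the sketch below are an assumption based on the usual Linux nvmet layout, not something the log itself confirms.

# Sketch of the target-side half of nvmet_auth_set_key, under the assumption that
# the values land in the standard nvmet configfs attributes for DH-HMAC-CHAP.
nvmet_set_host_auth() {
    local digest=$1 dhgroup=$2 key=$3 ckey=$4
    local host_cfg=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo "hmac(${digest})" > "$host_cfg/dhchap_hash"      # e.g. hmac(sha384)
    echo "$dhgroup"        > "$host_cfg/dhchap_dhgroup"   # e.g. ffdhe3072
    echo "$key"            > "$host_cfg/dhchap_key"       # DHHC-1:... host secret
    if [[ -n $ckey ]]; then
        # keyid 4 in this run has no controller secret, so this write is conditional,
        # mirroring the [[ -z '' ]] check at host/auth.sh@51 in the trace.
        echo "$ckey" > "$host_cfg/dhchap_ctrl_key"
    fi
}

A call such as nvmet_set_host_auth sha384 ffdhe3072 "$key1" "$ckey1" would correspond to the keyid=1 iteration whose echoed values appear just above.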
00:33:15.156 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:15.156 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:15.156 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:15.156 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:15.156 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:15.156 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:15.156 09:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.156 09:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.156 09:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.156 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:15.156 09:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:15.156 09:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:15.156 09:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:15.156 09:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:15.156 09:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:15.156 09:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:15.156 09:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:15.156 09:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:15.156 09:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:15.156 09:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:15.156 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:15.156 09:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.156 09:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.414 nvme0n1 00:33:15.414 09:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.414 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:15.414 09:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.414 09:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.414 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:15.414 09:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.414 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:15.414 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:15.414 09:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.414 09:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.414 09:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.414 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:33:15.414 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:33:15.414 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:15.414 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:15.414 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:15.414 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:15.415 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzZmOWJhMzk0YTJjYjY1NWFjNTk1ZWYxZDFjZDJhNzkpLKwQ: 00:33:15.415 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2JiNTZmYTE4ZDI4Yzc2Y2NiZTU5YWEyYjg4ZmExZGFsP2rI: 00:33:15.415 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:15.415 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:15.415 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzZmOWJhMzk0YTJjYjY1NWFjNTk1ZWYxZDFjZDJhNzkpLKwQ: 00:33:15.415 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2JiNTZmYTE4ZDI4Yzc2Y2NiZTU5YWEyYjg4ZmExZGFsP2rI: ]] 00:33:15.415 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2JiNTZmYTE4ZDI4Yzc2Y2NiZTU5YWEyYjg4ZmExZGFsP2rI: 00:33:15.415 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:33:15.415 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:15.415 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:15.415 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:15.415 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:15.415 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:15.415 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:15.415 09:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.415 09:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.415 09:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.415 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:15.415 09:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:15.415 09:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:15.415 09:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:15.415 09:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:15.415 09:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:15.415 09:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:15.415 09:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:15.415 09:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:15.415 09:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:15.415 09:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:15.415 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:15.415 09:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.415 09:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.673 nvme0n1 00:33:15.673 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.673 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:15.674 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:15.674 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.674 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.674 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.674 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:15.674 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:15.674 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.674 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.674 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.674 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:15.674 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:33:15.674 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:15.674 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:15.674 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:15.674 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:15.674 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDViNDA5NzY3M2Q1MDE2YjA1NWQ2ZTc0YTZiMTNlNjEyZGU3NDZjMzdkMzdkM2I5rKGuKA==: 00:33:15.674 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWIyYjliNGQyZTEwZmI1NTRhYmUyZjUyYTEyN2VmYzIJweoA: 00:33:15.674 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:15.674 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:15.674 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDViNDA5NzY3M2Q1MDE2YjA1NWQ2ZTc0YTZiMTNlNjEyZGU3NDZjMzdkMzdkM2I5rKGuKA==: 00:33:15.674 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWIyYjliNGQyZTEwZmI1NTRhYmUyZjUyYTEyN2VmYzIJweoA: ]] 00:33:15.674 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWIyYjliNGQyZTEwZmI1NTRhYmUyZjUyYTEyN2VmYzIJweoA: 00:33:15.674 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:33:15.674 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:15.674 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:15.674 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:15.674 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:15.674 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:15.674 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:15.674 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.674 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.674 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.674 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:15.674 09:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:15.674 09:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:15.674 09:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:15.674 09:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:15.674 09:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:15.674 09:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:15.674 09:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:15.674 09:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:15.674 09:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:15.674 09:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:15.674 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:15.674 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.674 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.932 nvme0n1 00:33:15.932 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.932 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:15.932 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.932 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:15.932 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.932 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.932 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:15.932 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:15.932 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.932 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.932 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.932 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:15.932 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:33:15.932 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:15.932 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:15.932 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:15.932 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:15.932 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OTljNzg0NWZhOWZiN2FkZjc1ZDczMWYzZDU2MjE3YWY3NWE5YmFlNzUzMDhlMDA0ZmRhMTgzNjU3OTc1YzRlNNKnvm8=: 00:33:15.932 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:15.932 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:15.932 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:15.932 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTljNzg0NWZhOWZiN2FkZjc1ZDczMWYzZDU2MjE3YWY3NWE5YmFlNzUzMDhlMDA0ZmRhMTgzNjU3OTc1YzRlNNKnvm8=: 00:33:15.932 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:15.932 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:33:15.932 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:15.932 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:15.932 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:15.932 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:15.932 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:15.932 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:15.932 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.932 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.932 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.932 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:15.932 09:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:15.932 09:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:15.932 09:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:15.932 09:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:15.932 09:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:15.932 09:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:15.932 09:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:15.932 09:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:15.932 09:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:15.932 09:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:15.932 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:15.932 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.932 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.190 nvme0n1 00:33:16.190 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.190 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:16.190 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:16.190 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.190 09:43:00 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.190 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.190 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:16.190 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:16.190 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.190 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.190 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.191 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:16.191 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:16.191 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:33:16.191 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:16.191 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:16.191 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:16.191 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:16.191 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjkzNzQ3ZDcxM2YyNDY1MjQxZGQ0YzEzMjJmMTRiMGXzFvzn: 00:33:16.191 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTcxYzk5NmM2MzA0NmY4MzJiMGIwNjdlMTY5Y2IwOGRkZWE0ZmM0NGY1OTk5YjRkOTVjOGM2NzA5MzgzODZjZmQbMFE=: 00:33:16.191 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:16.191 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:16.191 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjkzNzQ3ZDcxM2YyNDY1MjQxZGQ0YzEzMjJmMTRiMGXzFvzn: 00:33:16.191 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTcxYzk5NmM2MzA0NmY4MzJiMGIwNjdlMTY5Y2IwOGRkZWE0ZmM0NGY1OTk5YjRkOTVjOGM2NzA5MzgzODZjZmQbMFE=: ]] 00:33:16.191 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTcxYzk5NmM2MzA0NmY4MzJiMGIwNjdlMTY5Y2IwOGRkZWE0ZmM0NGY1OTk5YjRkOTVjOGM2NzA5MzgzODZjZmQbMFE=: 00:33:16.191 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:33:16.191 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:16.191 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:16.191 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:16.191 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:16.191 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:16.191 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:16.191 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.191 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.191 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.191 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:16.191 09:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:16.191 09:43:00 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:33:16.191 09:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:16.191 09:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:16.191 09:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:16.191 09:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:16.191 09:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:16.191 09:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:16.191 09:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:16.191 09:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:16.191 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:16.191 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.191 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.449 nvme0n1 00:33:16.449 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.449 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:16.449 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.449 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:16.449 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.449 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.707 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:16.707 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:16.707 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.707 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.707 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.707 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:16.707 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:33:16.707 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:16.707 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:16.707 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:16.707 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:16.707 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTU0ZjY3ZjJkODc2NmI5ZTg2N2RjZDFmZDkyYjIwZGE5ZDUxYmZmYmQ2NmMxYjcxOuJ+qQ==: 00:33:16.707 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzExYzlkNGM4YzRlZjgzOTI4YTRhNzQwNmRjNjQzYzRkMzU3Y2U1MjY0ZWI2YWYw4Fl4Ww==: 00:33:16.707 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:16.707 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:16.707 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NTU0ZjY3ZjJkODc2NmI5ZTg2N2RjZDFmZDkyYjIwZGE5ZDUxYmZmYmQ2NmMxYjcxOuJ+qQ==: 00:33:16.707 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzExYzlkNGM4YzRlZjgzOTI4YTRhNzQwNmRjNjQzYzRkMzU3Y2U1MjY0ZWI2YWYw4Fl4Ww==: ]] 00:33:16.707 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzExYzlkNGM4YzRlZjgzOTI4YTRhNzQwNmRjNjQzYzRkMzU3Y2U1MjY0ZWI2YWYw4Fl4Ww==: 00:33:16.707 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:33:16.707 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:16.707 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:16.707 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:16.707 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:16.707 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:16.707 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:16.707 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.707 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.707 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.707 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:16.707 09:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:16.707 09:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:16.707 09:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:16.707 09:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:16.707 09:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:16.707 09:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:16.707 09:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:16.707 09:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:16.707 09:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:16.707 09:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:16.707 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:16.707 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.707 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.966 nvme0n1 00:33:16.966 09:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.966 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:16.966 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:16.966 09:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.966 09:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.966 09:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.966 09:43:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:16.966 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:16.966 09:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.966 09:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.966 09:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.966 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:16.966 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:33:16.966 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:16.966 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:16.966 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:16.966 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:16.966 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzZmOWJhMzk0YTJjYjY1NWFjNTk1ZWYxZDFjZDJhNzkpLKwQ: 00:33:16.966 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2JiNTZmYTE4ZDI4Yzc2Y2NiZTU5YWEyYjg4ZmExZGFsP2rI: 00:33:16.966 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:16.966 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:16.966 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzZmOWJhMzk0YTJjYjY1NWFjNTk1ZWYxZDFjZDJhNzkpLKwQ: 00:33:16.966 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2JiNTZmYTE4ZDI4Yzc2Y2NiZTU5YWEyYjg4ZmExZGFsP2rI: ]] 00:33:16.966 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2JiNTZmYTE4ZDI4Yzc2Y2NiZTU5YWEyYjg4ZmExZGFsP2rI: 00:33:16.966 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:33:16.966 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:16.966 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:16.966 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:16.966 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:16.966 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:16.966 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:16.966 09:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.966 09:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.966 09:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.966 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:16.966 09:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:16.966 09:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:16.966 09:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:16.966 09:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:16.966 09:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:16.966 09:43:01 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:16.966 09:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:16.966 09:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:16.966 09:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:16.966 09:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:16.966 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:16.966 09:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.966 09:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.224 nvme0n1 00:33:17.224 09:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.224 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:17.224 09:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.224 09:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.224 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:17.224 09:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.224 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:17.224 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:17.224 09:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.224 09:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.225 09:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.225 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:17.225 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:33:17.225 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:17.225 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:17.225 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:17.225 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:17.225 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDViNDA5NzY3M2Q1MDE2YjA1NWQ2ZTc0YTZiMTNlNjEyZGU3NDZjMzdkMzdkM2I5rKGuKA==: 00:33:17.225 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWIyYjliNGQyZTEwZmI1NTRhYmUyZjUyYTEyN2VmYzIJweoA: 00:33:17.225 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:17.225 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:17.225 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDViNDA5NzY3M2Q1MDE2YjA1NWQ2ZTc0YTZiMTNlNjEyZGU3NDZjMzdkMzdkM2I5rKGuKA==: 00:33:17.225 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWIyYjliNGQyZTEwZmI1NTRhYmUyZjUyYTEyN2VmYzIJweoA: ]] 00:33:17.225 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWIyYjliNGQyZTEwZmI1NTRhYmUyZjUyYTEyN2VmYzIJweoA: 00:33:17.225 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:33:17.225 09:43:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:17.225 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:17.225 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:17.483 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:17.483 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:17.483 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:17.483 09:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.483 09:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.483 09:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.483 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:17.483 09:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:17.483 09:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:17.483 09:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:17.483 09:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:17.483 09:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:17.483 09:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:17.483 09:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:17.483 09:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:17.483 09:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:17.483 09:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:17.483 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:17.483 09:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.483 09:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.742 nvme0n1 00:33:17.742 09:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.742 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:17.742 09:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.742 09:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.742 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:17.742 09:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.742 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:17.742 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:17.742 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.742 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.742 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.742 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:33:17.742 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:33:17.742 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:17.742 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:17.742 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:17.742 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:17.742 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTljNzg0NWZhOWZiN2FkZjc1ZDczMWYzZDU2MjE3YWY3NWE5YmFlNzUzMDhlMDA0ZmRhMTgzNjU3OTc1YzRlNNKnvm8=: 00:33:17.742 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:17.742 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:17.742 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:17.742 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTljNzg0NWZhOWZiN2FkZjc1ZDczMWYzZDU2MjE3YWY3NWE5YmFlNzUzMDhlMDA0ZmRhMTgzNjU3OTc1YzRlNNKnvm8=: 00:33:17.742 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:17.742 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:33:17.742 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:17.742 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:17.742 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:17.742 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:17.742 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:17.742 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:17.742 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.742 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.742 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.742 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:17.742 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:17.742 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:17.742 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:17.742 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:17.742 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:17.742 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:17.742 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:17.742 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:17.742 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:17.742 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:17.742 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:17.742 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:33:17.742 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.000 nvme0n1 00:33:18.000 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.000 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:18.000 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.000 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:18.000 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.001 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.001 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:18.001 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:18.001 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.001 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.001 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.001 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:18.001 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:18.001 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:33:18.001 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:18.001 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:18.001 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:18.001 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:18.001 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjkzNzQ3ZDcxM2YyNDY1MjQxZGQ0YzEzMjJmMTRiMGXzFvzn: 00:33:18.001 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTcxYzk5NmM2MzA0NmY4MzJiMGIwNjdlMTY5Y2IwOGRkZWE0ZmM0NGY1OTk5YjRkOTVjOGM2NzA5MzgzODZjZmQbMFE=: 00:33:18.001 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:18.001 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:18.001 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjkzNzQ3ZDcxM2YyNDY1MjQxZGQ0YzEzMjJmMTRiMGXzFvzn: 00:33:18.001 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTcxYzk5NmM2MzA0NmY4MzJiMGIwNjdlMTY5Y2IwOGRkZWE0ZmM0NGY1OTk5YjRkOTVjOGM2NzA5MzgzODZjZmQbMFE=: ]] 00:33:18.001 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTcxYzk5NmM2MzA0NmY4MzJiMGIwNjdlMTY5Y2IwOGRkZWE0ZmM0NGY1OTk5YjRkOTVjOGM2NzA5MzgzODZjZmQbMFE=: 00:33:18.001 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:33:18.001 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:18.001 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:18.001 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:18.001 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:18.001 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:18.001 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:33:18.001 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.001 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.001 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.001 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:18.001 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:18.001 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:18.001 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:18.001 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:18.001 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:18.001 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:18.001 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:18.001 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:18.001 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:18.001 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:18.001 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:18.001 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.001 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.567 nvme0n1 00:33:18.567 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.567 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:18.567 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.567 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.567 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:18.567 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.567 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:18.567 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:18.567 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.567 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.567 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.567 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:18.567 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:33:18.567 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:18.567 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:18.567 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:18.567 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:18.567 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NTU0ZjY3ZjJkODc2NmI5ZTg2N2RjZDFmZDkyYjIwZGE5ZDUxYmZmYmQ2NmMxYjcxOuJ+qQ==: 00:33:18.567 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzExYzlkNGM4YzRlZjgzOTI4YTRhNzQwNmRjNjQzYzRkMzU3Y2U1MjY0ZWI2YWYw4Fl4Ww==: 00:33:18.567 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:18.567 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:18.567 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTU0ZjY3ZjJkODc2NmI5ZTg2N2RjZDFmZDkyYjIwZGE5ZDUxYmZmYmQ2NmMxYjcxOuJ+qQ==: 00:33:18.567 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzExYzlkNGM4YzRlZjgzOTI4YTRhNzQwNmRjNjQzYzRkMzU3Y2U1MjY0ZWI2YWYw4Fl4Ww==: ]] 00:33:18.567 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzExYzlkNGM4YzRlZjgzOTI4YTRhNzQwNmRjNjQzYzRkMzU3Y2U1MjY0ZWI2YWYw4Fl4Ww==: 00:33:18.567 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:33:18.567 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:18.567 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:18.567 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:18.567 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:18.567 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:18.567 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:18.567 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.567 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.567 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.567 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:18.567 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:18.567 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:18.567 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:18.567 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:18.567 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:18.567 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:18.567 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:18.567 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:18.567 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:18.567 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:18.567 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:18.567 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.567 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.132 nvme0n1 00:33:19.132 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.132 09:43:03 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:19.132 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.132 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.132 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:19.132 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.389 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:19.389 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:19.389 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.389 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.389 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.389 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:19.389 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:33:19.389 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:19.389 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:19.389 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:19.389 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:19.389 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzZmOWJhMzk0YTJjYjY1NWFjNTk1ZWYxZDFjZDJhNzkpLKwQ: 00:33:19.389 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2JiNTZmYTE4ZDI4Yzc2Y2NiZTU5YWEyYjg4ZmExZGFsP2rI: 00:33:19.389 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:19.389 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:19.389 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzZmOWJhMzk0YTJjYjY1NWFjNTk1ZWYxZDFjZDJhNzkpLKwQ: 00:33:19.389 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2JiNTZmYTE4ZDI4Yzc2Y2NiZTU5YWEyYjg4ZmExZGFsP2rI: ]] 00:33:19.389 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2JiNTZmYTE4ZDI4Yzc2Y2NiZTU5YWEyYjg4ZmExZGFsP2rI: 00:33:19.389 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:33:19.389 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:19.389 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:19.389 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:19.389 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:19.389 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:19.389 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:19.389 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.389 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.389 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.389 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:19.389 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:33:19.389 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:19.389 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:19.389 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:19.389 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:19.389 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:19.389 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:19.389 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:19.389 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:19.389 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:19.389 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:19.389 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.389 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.954 nvme0n1 00:33:19.954 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.954 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:19.954 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:19.954 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.954 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.954 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.954 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:19.954 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:19.954 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.954 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.954 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.954 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:19.954 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:33:19.954 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:19.954 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:19.954 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:19.954 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:19.954 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDViNDA5NzY3M2Q1MDE2YjA1NWQ2ZTc0YTZiMTNlNjEyZGU3NDZjMzdkMzdkM2I5rKGuKA==: 00:33:19.954 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWIyYjliNGQyZTEwZmI1NTRhYmUyZjUyYTEyN2VmYzIJweoA: 00:33:19.954 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:19.954 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:19.954 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MDViNDA5NzY3M2Q1MDE2YjA1NWQ2ZTc0YTZiMTNlNjEyZGU3NDZjMzdkMzdkM2I5rKGuKA==: 00:33:19.954 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWIyYjliNGQyZTEwZmI1NTRhYmUyZjUyYTEyN2VmYzIJweoA: ]] 00:33:19.954 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWIyYjliNGQyZTEwZmI1NTRhYmUyZjUyYTEyN2VmYzIJweoA: 00:33:19.954 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:33:19.954 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:19.954 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:19.954 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:19.954 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:19.954 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:19.954 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:19.954 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.954 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.954 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.954 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:19.954 09:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:19.954 09:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:19.954 09:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:19.954 09:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:19.954 09:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:19.954 09:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:19.954 09:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:19.954 09:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:19.954 09:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:19.954 09:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:19.954 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:19.954 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.954 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.517 nvme0n1 00:33:20.517 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.518 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:20.518 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:20.518 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.518 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.518 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.518 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:33:20.518 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:20.518 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.518 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.518 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.518 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:20.518 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:33:20.518 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:20.518 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:20.518 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:20.518 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:20.518 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTljNzg0NWZhOWZiN2FkZjc1ZDczMWYzZDU2MjE3YWY3NWE5YmFlNzUzMDhlMDA0ZmRhMTgzNjU3OTc1YzRlNNKnvm8=: 00:33:20.518 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:20.518 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:20.518 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:20.518 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTljNzg0NWZhOWZiN2FkZjc1ZDczMWYzZDU2MjE3YWY3NWE5YmFlNzUzMDhlMDA0ZmRhMTgzNjU3OTc1YzRlNNKnvm8=: 00:33:20.518 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:20.518 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:33:20.518 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:20.518 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:20.518 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:20.518 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:20.518 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:20.518 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:20.518 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.518 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.518 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.518 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:20.518 09:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:20.518 09:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:20.518 09:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:20.518 09:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:20.518 09:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:20.518 09:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:20.518 09:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:20.518 09:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
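[editor's note] Each iteration above repeats the same RPC sequence against the target and the SPDK host: program the key for this keyid on the target, restrict the host to a single digest/dhgroup pair, attach with the matching keyring entries, then verify and tear down before the next combination. A minimal sketch of that per-iteration sequence, using the 10.0.0.1:4420 listener and nqn.2024-02.io.spdk names shown in the log; the rpc_cmd and nvmet_auth_set_key helpers are assumed to behave as host/auth.sh and nvmf/common.sh define them:
    # target side: install the DH-HMAC-CHAP key (and optional controller key) for this keyid
    nvmet_auth_set_key sha384 ffdhe6144 1                      # helper from host/auth.sh
    # host side: allow only this digest/dhgroup combination
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
    # connect using the keyring entries registered earlier as key1/ckey1
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # verify the controller came up, then detach before the next combination
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0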
00:33:20.518 09:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:20.518 09:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:20.518 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:20.518 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.518 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.081 nvme0n1 00:33:21.081 09:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.081 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:21.081 09:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.081 09:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.081 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:21.081 09:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.081 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:21.081 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:21.081 09:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.081 09:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.081 09:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.081 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:21.081 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:21.081 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:33:21.081 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:21.081 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:21.081 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:21.081 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:21.081 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjkzNzQ3ZDcxM2YyNDY1MjQxZGQ0YzEzMjJmMTRiMGXzFvzn: 00:33:21.081 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTcxYzk5NmM2MzA0NmY4MzJiMGIwNjdlMTY5Y2IwOGRkZWE0ZmM0NGY1OTk5YjRkOTVjOGM2NzA5MzgzODZjZmQbMFE=: 00:33:21.081 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:21.081 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:21.081 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjkzNzQ3ZDcxM2YyNDY1MjQxZGQ0YzEzMjJmMTRiMGXzFvzn: 00:33:21.081 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTcxYzk5NmM2MzA0NmY4MzJiMGIwNjdlMTY5Y2IwOGRkZWE0ZmM0NGY1OTk5YjRkOTVjOGM2NzA5MzgzODZjZmQbMFE=: ]] 00:33:21.081 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTcxYzk5NmM2MzA0NmY4MzJiMGIwNjdlMTY5Y2IwOGRkZWE0ZmM0NGY1OTk5YjRkOTVjOGM2NzA5MzgzODZjZmQbMFE=: 00:33:21.081 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:33:21.081 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
00:33:21.081 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:21.081 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:21.081 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:21.081 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:21.081 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:21.081 09:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.081 09:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.081 09:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.081 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:21.081 09:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:21.081 09:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:21.081 09:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:21.081 09:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:21.081 09:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:21.081 09:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:21.081 09:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:21.081 09:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:21.081 09:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:21.081 09:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:21.081 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:21.081 09:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.081 09:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.011 nvme0n1 00:33:22.011 09:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.011 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:22.011 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:22.011 09:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.011 09:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.011 09:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.011 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:22.011 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:22.011 09:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.011 09:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.011 09:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.011 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:22.011 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:33:22.011 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:22.011 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:22.011 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:22.011 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:22.011 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTU0ZjY3ZjJkODc2NmI5ZTg2N2RjZDFmZDkyYjIwZGE5ZDUxYmZmYmQ2NmMxYjcxOuJ+qQ==: 00:33:22.011 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzExYzlkNGM4YzRlZjgzOTI4YTRhNzQwNmRjNjQzYzRkMzU3Y2U1MjY0ZWI2YWYw4Fl4Ww==: 00:33:22.011 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:22.011 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:22.011 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTU0ZjY3ZjJkODc2NmI5ZTg2N2RjZDFmZDkyYjIwZGE5ZDUxYmZmYmQ2NmMxYjcxOuJ+qQ==: 00:33:22.011 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzExYzlkNGM4YzRlZjgzOTI4YTRhNzQwNmRjNjQzYzRkMzU3Y2U1MjY0ZWI2YWYw4Fl4Ww==: ]] 00:33:22.011 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzExYzlkNGM4YzRlZjgzOTI4YTRhNzQwNmRjNjQzYzRkMzU3Y2U1MjY0ZWI2YWYw4Fl4Ww==: 00:33:22.011 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:33:22.011 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:22.011 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:22.011 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:22.011 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:22.011 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:22.011 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:22.011 09:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.011 09:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.011 09:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.011 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:22.011 09:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:22.011 09:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:22.011 09:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:22.011 09:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:22.011 09:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:22.011 09:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:22.011 09:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:22.011 09:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:22.011 09:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:22.011 09:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:22.011 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:22.011 09:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.011 09:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.381 nvme0n1 00:33:23.381 09:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.381 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:23.381 09:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.381 09:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.381 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:23.381 09:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.381 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:23.381 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:23.381 09:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.381 09:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.381 09:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.381 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:23.381 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:33:23.381 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:23.381 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:23.381 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:23.381 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:23.381 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzZmOWJhMzk0YTJjYjY1NWFjNTk1ZWYxZDFjZDJhNzkpLKwQ: 00:33:23.381 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2JiNTZmYTE4ZDI4Yzc2Y2NiZTU5YWEyYjg4ZmExZGFsP2rI: 00:33:23.381 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:23.381 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:23.381 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzZmOWJhMzk0YTJjYjY1NWFjNTk1ZWYxZDFjZDJhNzkpLKwQ: 00:33:23.381 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2JiNTZmYTE4ZDI4Yzc2Y2NiZTU5YWEyYjg4ZmExZGFsP2rI: ]] 00:33:23.381 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2JiNTZmYTE4ZDI4Yzc2Y2NiZTU5YWEyYjg4ZmExZGFsP2rI: 00:33:23.381 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:33:23.381 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:23.381 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:23.381 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:23.381 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:23.381 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:23.381 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:33:23.381 09:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.381 09:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.381 09:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.381 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:23.381 09:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:23.381 09:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:23.381 09:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:23.381 09:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:23.381 09:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:23.381 09:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:23.381 09:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:23.381 09:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:23.381 09:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:23.381 09:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:23.381 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:23.381 09:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.381 09:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.316 nvme0n1 00:33:24.316 09:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.316 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:24.316 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:24.316 09:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.316 09:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.316 09:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.316 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:24.316 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:24.316 09:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.316 09:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.316 09:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.316 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:24.316 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:33:24.316 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:24.316 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:24.316 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:24.316 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:24.316 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MDViNDA5NzY3M2Q1MDE2YjA1NWQ2ZTc0YTZiMTNlNjEyZGU3NDZjMzdkMzdkM2I5rKGuKA==: 00:33:24.316 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWIyYjliNGQyZTEwZmI1NTRhYmUyZjUyYTEyN2VmYzIJweoA: 00:33:24.316 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:24.316 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:24.316 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDViNDA5NzY3M2Q1MDE2YjA1NWQ2ZTc0YTZiMTNlNjEyZGU3NDZjMzdkMzdkM2I5rKGuKA==: 00:33:24.316 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWIyYjliNGQyZTEwZmI1NTRhYmUyZjUyYTEyN2VmYzIJweoA: ]] 00:33:24.316 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWIyYjliNGQyZTEwZmI1NTRhYmUyZjUyYTEyN2VmYzIJweoA: 00:33:24.316 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:33:24.316 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:24.316 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:24.316 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:24.316 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:24.316 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:24.316 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:24.316 09:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.316 09:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.316 09:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.316 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:24.316 09:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:24.316 09:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:24.316 09:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:24.316 09:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:24.316 09:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:24.316 09:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:24.316 09:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:24.316 09:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:24.316 09:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:24.316 09:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:24.316 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:24.316 09:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.316 09:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.252 nvme0n1 00:33:25.252 09:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.252 09:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:33:25.252 09:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:25.252 09:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.252 09:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.252 09:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.252 09:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:25.252 09:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:25.252 09:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.252 09:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.252 09:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.252 09:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:25.252 09:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:33:25.252 09:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:25.252 09:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:25.252 09:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:25.252 09:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:25.252 09:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTljNzg0NWZhOWZiN2FkZjc1ZDczMWYzZDU2MjE3YWY3NWE5YmFlNzUzMDhlMDA0ZmRhMTgzNjU3OTc1YzRlNNKnvm8=: 00:33:25.252 09:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:25.252 09:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:25.252 09:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:25.252 09:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTljNzg0NWZhOWZiN2FkZjc1ZDczMWYzZDU2MjE3YWY3NWE5YmFlNzUzMDhlMDA0ZmRhMTgzNjU3OTc1YzRlNNKnvm8=: 00:33:25.252 09:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:25.252 09:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:33:25.252 09:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:25.252 09:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:25.252 09:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:25.252 09:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:25.252 09:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:25.252 09:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:25.252 09:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.252 09:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.252 09:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.252 09:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:25.252 09:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:25.252 09:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:25.252 09:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:25.252 09:43:09 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:25.252 09:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:25.252 09:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:25.252 09:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:25.252 09:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:25.252 09:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:25.252 09:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:25.252 09:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:25.252 09:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.252 09:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.216 nvme0n1 00:33:26.216 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:26.216 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:26.216 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:26.216 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.216 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:26.216 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:26.216 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:26.216 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:26.216 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:26.216 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.216 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:26.216 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:26.216 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:26.216 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:26.216 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:33:26.216 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:26.216 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:26.216 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:26.216 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:26.216 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjkzNzQ3ZDcxM2YyNDY1MjQxZGQ0YzEzMjJmMTRiMGXzFvzn: 00:33:26.216 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTcxYzk5NmM2MzA0NmY4MzJiMGIwNjdlMTY5Y2IwOGRkZWE0ZmM0NGY1OTk5YjRkOTVjOGM2NzA5MzgzODZjZmQbMFE=: 00:33:26.216 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:26.216 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:26.216 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YjkzNzQ3ZDcxM2YyNDY1MjQxZGQ0YzEzMjJmMTRiMGXzFvzn: 00:33:26.216 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTcxYzk5NmM2MzA0NmY4MzJiMGIwNjdlMTY5Y2IwOGRkZWE0ZmM0NGY1OTk5YjRkOTVjOGM2NzA5MzgzODZjZmQbMFE=: ]] 00:33:26.216 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTcxYzk5NmM2MzA0NmY4MzJiMGIwNjdlMTY5Y2IwOGRkZWE0ZmM0NGY1OTk5YjRkOTVjOGM2NzA5MzgzODZjZmQbMFE=: 00:33:26.216 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:33:26.216 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:26.216 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:26.216 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:26.216 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:26.216 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:26.216 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:26.216 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:26.216 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.216 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:26.216 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:26.216 09:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:26.216 09:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:26.216 09:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:26.216 09:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:26.216 09:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:26.216 09:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:26.216 09:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:26.216 09:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:26.216 09:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:26.216 09:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:26.216 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:26.216 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:26.216 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.475 nvme0n1 00:33:26.475 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:26.475 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:26.475 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:26.475 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:26.475 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.475 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:26.475 09:43:10 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:26.475 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:26.475 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:26.475 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.475 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:26.475 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:26.475 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:33:26.475 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:26.475 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:26.475 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:26.475 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:26.475 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTU0ZjY3ZjJkODc2NmI5ZTg2N2RjZDFmZDkyYjIwZGE5ZDUxYmZmYmQ2NmMxYjcxOuJ+qQ==: 00:33:26.475 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzExYzlkNGM4YzRlZjgzOTI4YTRhNzQwNmRjNjQzYzRkMzU3Y2U1MjY0ZWI2YWYw4Fl4Ww==: 00:33:26.475 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:26.475 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:26.475 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTU0ZjY3ZjJkODc2NmI5ZTg2N2RjZDFmZDkyYjIwZGE5ZDUxYmZmYmQ2NmMxYjcxOuJ+qQ==: 00:33:26.475 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzExYzlkNGM4YzRlZjgzOTI4YTRhNzQwNmRjNjQzYzRkMzU3Y2U1MjY0ZWI2YWYw4Fl4Ww==: ]] 00:33:26.475 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzExYzlkNGM4YzRlZjgzOTI4YTRhNzQwNmRjNjQzYzRkMzU3Y2U1MjY0ZWI2YWYw4Fl4Ww==: 00:33:26.475 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:33:26.475 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:26.475 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:26.475 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:26.475 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:26.475 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:26.475 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:26.475 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:26.475 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.475 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:26.475 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:26.475 09:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:26.475 09:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:26.475 09:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:26.475 09:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:26.475 09:43:10 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:26.475 09:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:26.475 09:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:26.475 09:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:26.475 09:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:26.475 09:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:26.475 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:26.475 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:26.475 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.475 nvme0n1 00:33:26.475 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:26.475 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:26.475 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:26.475 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:26.475 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.733 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:26.733 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:26.733 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:26.733 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:26.733 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.733 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:26.733 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:26.733 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:33:26.733 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:26.733 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:26.733 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:26.733 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:26.733 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzZmOWJhMzk0YTJjYjY1NWFjNTk1ZWYxZDFjZDJhNzkpLKwQ: 00:33:26.733 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2JiNTZmYTE4ZDI4Yzc2Y2NiZTU5YWEyYjg4ZmExZGFsP2rI: 00:33:26.733 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:26.733 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:26.733 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzZmOWJhMzk0YTJjYjY1NWFjNTk1ZWYxZDFjZDJhNzkpLKwQ: 00:33:26.733 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2JiNTZmYTE4ZDI4Yzc2Y2NiZTU5YWEyYjg4ZmExZGFsP2rI: ]] 00:33:26.733 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2JiNTZmYTE4ZDI4Yzc2Y2NiZTU5YWEyYjg4ZmExZGFsP2rI: 00:33:26.733 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:33:26.733 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:26.733 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:26.733 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:26.733 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:26.733 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:26.733 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:26.733 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:26.733 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.733 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:26.733 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:26.733 09:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:26.733 09:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:26.733 09:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:26.733 09:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:26.733 09:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:26.733 09:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:26.733 09:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:26.733 09:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:26.733 09:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:26.733 09:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:26.733 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:26.733 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:26.733 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.733 nvme0n1 00:33:26.733 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:26.733 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:26.733 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:26.733 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:26.733 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.733 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:26.990 09:43:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDViNDA5NzY3M2Q1MDE2YjA1NWQ2ZTc0YTZiMTNlNjEyZGU3NDZjMzdkMzdkM2I5rKGuKA==: 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWIyYjliNGQyZTEwZmI1NTRhYmUyZjUyYTEyN2VmYzIJweoA: 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDViNDA5NzY3M2Q1MDE2YjA1NWQ2ZTc0YTZiMTNlNjEyZGU3NDZjMzdkMzdkM2I5rKGuKA==: 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWIyYjliNGQyZTEwZmI1NTRhYmUyZjUyYTEyN2VmYzIJweoA: ]] 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWIyYjliNGQyZTEwZmI1NTRhYmUyZjUyYTEyN2VmYzIJweoA: 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:26.990 09:43:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.990 nvme0n1 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTljNzg0NWZhOWZiN2FkZjc1ZDczMWYzZDU2MjE3YWY3NWE5YmFlNzUzMDhlMDA0ZmRhMTgzNjU3OTc1YzRlNNKnvm8=: 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTljNzg0NWZhOWZiN2FkZjc1ZDczMWYzZDU2MjE3YWY3NWE5YmFlNzUzMDhlMDA0ZmRhMTgzNjU3OTc1YzRlNNKnvm8=: 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:33:26.990 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.248 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.248 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:27.248 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:27.248 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:27.248 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:27.248 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:27.248 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:27.248 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:27.248 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:27.248 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:27.248 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:27.248 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:27.248 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:27.248 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.248 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.248 nvme0n1 00:33:27.248 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.248 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:27.248 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.248 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:27.248 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.248 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.248 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:27.248 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:27.248 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.248 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.248 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.248 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:27.248 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:27.248 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:33:27.248 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:27.248 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:27.248 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:27.248 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:27.248 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YjkzNzQ3ZDcxM2YyNDY1MjQxZGQ0YzEzMjJmMTRiMGXzFvzn: 00:33:27.248 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTcxYzk5NmM2MzA0NmY4MzJiMGIwNjdlMTY5Y2IwOGRkZWE0ZmM0NGY1OTk5YjRkOTVjOGM2NzA5MzgzODZjZmQbMFE=: 00:33:27.248 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:27.248 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:27.248 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjkzNzQ3ZDcxM2YyNDY1MjQxZGQ0YzEzMjJmMTRiMGXzFvzn: 00:33:27.248 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTcxYzk5NmM2MzA0NmY4MzJiMGIwNjdlMTY5Y2IwOGRkZWE0ZmM0NGY1OTk5YjRkOTVjOGM2NzA5MzgzODZjZmQbMFE=: ]] 00:33:27.248 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTcxYzk5NmM2MzA0NmY4MzJiMGIwNjdlMTY5Y2IwOGRkZWE0ZmM0NGY1OTk5YjRkOTVjOGM2NzA5MzgzODZjZmQbMFE=: 00:33:27.248 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:33:27.248 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:27.248 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:27.248 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:27.248 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:27.248 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:27.248 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:27.248 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.248 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.248 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.248 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:27.248 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:27.248 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:27.248 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:27.248 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:27.248 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:27.248 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:27.248 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:27.248 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:27.249 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:27.249 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:27.249 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:27.249 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.249 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.506 nvme0n1 00:33:27.507 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.507 
09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:27.507 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:27.507 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.507 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.507 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.507 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:27.507 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:27.507 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.507 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.507 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.507 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:27.507 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:33:27.507 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:27.507 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:27.507 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:27.507 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:27.507 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTU0ZjY3ZjJkODc2NmI5ZTg2N2RjZDFmZDkyYjIwZGE5ZDUxYmZmYmQ2NmMxYjcxOuJ+qQ==: 00:33:27.507 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzExYzlkNGM4YzRlZjgzOTI4YTRhNzQwNmRjNjQzYzRkMzU3Y2U1MjY0ZWI2YWYw4Fl4Ww==: 00:33:27.507 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:27.507 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:27.507 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTU0ZjY3ZjJkODc2NmI5ZTg2N2RjZDFmZDkyYjIwZGE5ZDUxYmZmYmQ2NmMxYjcxOuJ+qQ==: 00:33:27.507 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzExYzlkNGM4YzRlZjgzOTI4YTRhNzQwNmRjNjQzYzRkMzU3Y2U1MjY0ZWI2YWYw4Fl4Ww==: ]] 00:33:27.507 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzExYzlkNGM4YzRlZjgzOTI4YTRhNzQwNmRjNjQzYzRkMzU3Y2U1MjY0ZWI2YWYw4Fl4Ww==: 00:33:27.507 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:33:27.507 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:27.507 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:27.507 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:27.507 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:27.507 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:27.507 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:27.507 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.507 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.507 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.507 09:43:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:27.507 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:27.507 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:27.507 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:27.507 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:27.507 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:27.507 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:27.507 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:27.507 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:27.507 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:27.507 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:27.507 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:27.507 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.507 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.765 nvme0n1 00:33:27.765 09:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.765 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:27.765 09:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.765 09:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.765 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:27.765 09:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.765 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:27.765 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:27.765 09:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.765 09:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.765 09:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.765 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:27.765 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:33:27.765 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:27.765 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:27.765 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:27.765 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:27.765 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzZmOWJhMzk0YTJjYjY1NWFjNTk1ZWYxZDFjZDJhNzkpLKwQ: 00:33:27.765 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2JiNTZmYTE4ZDI4Yzc2Y2NiZTU5YWEyYjg4ZmExZGFsP2rI: 00:33:27.765 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:27.765 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:33:27.765 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzZmOWJhMzk0YTJjYjY1NWFjNTk1ZWYxZDFjZDJhNzkpLKwQ: 00:33:27.765 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2JiNTZmYTE4ZDI4Yzc2Y2NiZTU5YWEyYjg4ZmExZGFsP2rI: ]] 00:33:27.765 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2JiNTZmYTE4ZDI4Yzc2Y2NiZTU5YWEyYjg4ZmExZGFsP2rI: 00:33:27.765 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:33:27.765 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:27.765 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:27.765 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:27.765 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:27.765 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:27.765 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:27.765 09:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.765 09:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.765 09:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.765 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:27.765 09:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:27.765 09:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:27.765 09:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:27.765 09:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:27.765 09:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:27.765 09:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:27.765 09:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:27.765 09:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:27.765 09:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:27.765 09:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:27.765 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:27.765 09:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.765 09:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.023 nvme0n1 00:33:28.023 09:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:28.023 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:28.023 09:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:28.023 09:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.023 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:28.023 09:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:28.023 09:43:12 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:28.023 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:28.023 09:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:28.023 09:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.023 09:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:28.023 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:28.023 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:33:28.023 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:28.023 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:28.023 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:28.023 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:28.023 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDViNDA5NzY3M2Q1MDE2YjA1NWQ2ZTc0YTZiMTNlNjEyZGU3NDZjMzdkMzdkM2I5rKGuKA==: 00:33:28.023 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWIyYjliNGQyZTEwZmI1NTRhYmUyZjUyYTEyN2VmYzIJweoA: 00:33:28.023 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:28.023 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:28.023 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDViNDA5NzY3M2Q1MDE2YjA1NWQ2ZTc0YTZiMTNlNjEyZGU3NDZjMzdkMzdkM2I5rKGuKA==: 00:33:28.023 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWIyYjliNGQyZTEwZmI1NTRhYmUyZjUyYTEyN2VmYzIJweoA: ]] 00:33:28.023 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWIyYjliNGQyZTEwZmI1NTRhYmUyZjUyYTEyN2VmYzIJweoA: 00:33:28.023 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:33:28.023 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:28.023 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:28.023 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:28.023 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:28.023 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:28.023 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:28.023 09:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:28.023 09:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.281 09:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:28.281 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:28.281 09:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:28.281 09:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:28.281 09:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:28.281 09:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:28.281 09:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:33:28.281 09:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:28.281 09:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:28.281 09:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:28.281 09:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:28.281 09:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:28.281 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:28.281 09:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:28.281 09:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.281 nvme0n1 00:33:28.281 09:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:28.281 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:28.281 09:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:28.281 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:28.281 09:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.281 09:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:28.281 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:28.281 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:28.282 09:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:28.282 09:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.282 09:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:28.282 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:28.282 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:33:28.282 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:28.282 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:28.282 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:28.282 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:28.282 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTljNzg0NWZhOWZiN2FkZjc1ZDczMWYzZDU2MjE3YWY3NWE5YmFlNzUzMDhlMDA0ZmRhMTgzNjU3OTc1YzRlNNKnvm8=: 00:33:28.282 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:28.282 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:28.282 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:28.282 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTljNzg0NWZhOWZiN2FkZjc1ZDczMWYzZDU2MjE3YWY3NWE5YmFlNzUzMDhlMDA0ZmRhMTgzNjU3OTc1YzRlNNKnvm8=: 00:33:28.282 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:28.282 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:33:28.282 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:28.282 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:28.282 
09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:28.282 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:28.282 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:28.282 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:28.282 09:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:28.282 09:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.540 09:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:28.540 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:28.540 09:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:28.540 09:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:28.540 09:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:28.540 09:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:28.540 09:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:28.540 09:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:28.540 09:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:28.540 09:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:28.540 09:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:28.540 09:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:28.540 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:28.540 09:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:28.540 09:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.540 nvme0n1 00:33:28.540 09:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:28.540 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:28.540 09:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:28.540 09:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.540 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:28.540 09:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:28.540 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:28.540 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:28.540 09:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:28.540 09:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.540 09:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:28.798 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:28.798 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:28.798 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:33:28.798 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:28.798 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:28.798 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:28.798 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:28.798 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjkzNzQ3ZDcxM2YyNDY1MjQxZGQ0YzEzMjJmMTRiMGXzFvzn: 00:33:28.798 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTcxYzk5NmM2MzA0NmY4MzJiMGIwNjdlMTY5Y2IwOGRkZWE0ZmM0NGY1OTk5YjRkOTVjOGM2NzA5MzgzODZjZmQbMFE=: 00:33:28.798 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:28.798 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:28.798 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjkzNzQ3ZDcxM2YyNDY1MjQxZGQ0YzEzMjJmMTRiMGXzFvzn: 00:33:28.798 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTcxYzk5NmM2MzA0NmY4MzJiMGIwNjdlMTY5Y2IwOGRkZWE0ZmM0NGY1OTk5YjRkOTVjOGM2NzA5MzgzODZjZmQbMFE=: ]] 00:33:28.798 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTcxYzk5NmM2MzA0NmY4MzJiMGIwNjdlMTY5Y2IwOGRkZWE0ZmM0NGY1OTk5YjRkOTVjOGM2NzA5MzgzODZjZmQbMFE=: 00:33:28.798 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:33:28.798 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:28.798 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:28.798 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:28.798 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:28.798 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:28.798 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:28.798 09:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:28.798 09:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.798 09:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:28.798 09:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:28.798 09:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:28.798 09:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:28.798 09:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:28.798 09:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:28.798 09:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:28.798 09:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:28.798 09:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:28.798 09:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:28.798 09:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:28.798 09:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:28.798 09:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:28.798 09:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:28.798 09:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.055 nvme0n1 00:33:29.055 09:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.055 09:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:29.055 09:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.055 09:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:29.055 09:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.055 09:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.055 09:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:29.055 09:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:29.055 09:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.055 09:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.055 09:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.055 09:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:29.055 09:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:33:29.055 09:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:29.055 09:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:29.055 09:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:29.055 09:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:29.055 09:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTU0ZjY3ZjJkODc2NmI5ZTg2N2RjZDFmZDkyYjIwZGE5ZDUxYmZmYmQ2NmMxYjcxOuJ+qQ==: 00:33:29.055 09:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzExYzlkNGM4YzRlZjgzOTI4YTRhNzQwNmRjNjQzYzRkMzU3Y2U1MjY0ZWI2YWYw4Fl4Ww==: 00:33:29.055 09:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:29.055 09:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:29.055 09:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTU0ZjY3ZjJkODc2NmI5ZTg2N2RjZDFmZDkyYjIwZGE5ZDUxYmZmYmQ2NmMxYjcxOuJ+qQ==: 00:33:29.055 09:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzExYzlkNGM4YzRlZjgzOTI4YTRhNzQwNmRjNjQzYzRkMzU3Y2U1MjY0ZWI2YWYw4Fl4Ww==: ]] 00:33:29.055 09:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzExYzlkNGM4YzRlZjgzOTI4YTRhNzQwNmRjNjQzYzRkMzU3Y2U1MjY0ZWI2YWYw4Fl4Ww==: 00:33:29.055 09:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:33:29.055 09:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:29.055 09:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:29.055 09:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:29.055 09:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:29.055 09:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:29.055 09:43:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:29.055 09:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.055 09:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.055 09:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.055 09:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:29.055 09:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:29.055 09:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:29.055 09:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:29.055 09:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:29.055 09:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:29.055 09:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:29.055 09:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:29.055 09:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:29.055 09:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:29.055 09:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:29.055 09:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:29.055 09:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.055 09:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.310 nvme0n1 00:33:29.310 09:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.310 09:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:29.311 09:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.311 09:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:29.311 09:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.311 09:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.311 09:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:29.311 09:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:29.311 09:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.311 09:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.311 09:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.311 09:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:29.311 09:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:33:29.311 09:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:29.311 09:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:29.311 09:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:29.311 09:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
00:33:29.311 09:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzZmOWJhMzk0YTJjYjY1NWFjNTk1ZWYxZDFjZDJhNzkpLKwQ: 00:33:29.311 09:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2JiNTZmYTE4ZDI4Yzc2Y2NiZTU5YWEyYjg4ZmExZGFsP2rI: 00:33:29.311 09:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:29.311 09:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:29.311 09:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzZmOWJhMzk0YTJjYjY1NWFjNTk1ZWYxZDFjZDJhNzkpLKwQ: 00:33:29.311 09:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2JiNTZmYTE4ZDI4Yzc2Y2NiZTU5YWEyYjg4ZmExZGFsP2rI: ]] 00:33:29.311 09:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2JiNTZmYTE4ZDI4Yzc2Y2NiZTU5YWEyYjg4ZmExZGFsP2rI: 00:33:29.311 09:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:33:29.311 09:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:29.311 09:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:29.311 09:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:29.311 09:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:29.311 09:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:29.311 09:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:29.311 09:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.311 09:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.311 09:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.311 09:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:29.311 09:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:29.311 09:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:29.311 09:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:29.311 09:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:29.311 09:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:29.311 09:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:29.311 09:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:29.311 09:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:29.311 09:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:29.311 09:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:29.311 09:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:29.311 09:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.311 09:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.567 nvme0n1 00:33:29.567 09:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.567 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:33:29.567 09:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.567 09:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.567 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:29.824 09:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.824 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:29.824 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:29.824 09:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.824 09:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.824 09:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.824 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:29.824 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:33:29.824 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:29.824 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:29.824 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:29.824 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:29.824 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDViNDA5NzY3M2Q1MDE2YjA1NWQ2ZTc0YTZiMTNlNjEyZGU3NDZjMzdkMzdkM2I5rKGuKA==: 00:33:29.824 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWIyYjliNGQyZTEwZmI1NTRhYmUyZjUyYTEyN2VmYzIJweoA: 00:33:29.824 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:29.824 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:29.824 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDViNDA5NzY3M2Q1MDE2YjA1NWQ2ZTc0YTZiMTNlNjEyZGU3NDZjMzdkMzdkM2I5rKGuKA==: 00:33:29.824 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWIyYjliNGQyZTEwZmI1NTRhYmUyZjUyYTEyN2VmYzIJweoA: ]] 00:33:29.824 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWIyYjliNGQyZTEwZmI1NTRhYmUyZjUyYTEyN2VmYzIJweoA: 00:33:29.824 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:33:29.824 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:29.824 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:29.824 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:29.824 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:29.824 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:29.824 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:29.824 09:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.824 09:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.824 09:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.824 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:29.824 09:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:33:29.824 09:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:29.824 09:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:29.824 09:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:29.824 09:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:29.824 09:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:29.824 09:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:29.824 09:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:29.824 09:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:29.824 09:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:29.824 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:29.824 09:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.824 09:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.080 nvme0n1 00:33:30.081 09:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:30.081 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:30.081 09:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:30.081 09:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.081 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:30.081 09:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:30.081 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:30.081 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:30.081 09:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:30.081 09:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.081 09:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:30.081 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:30.081 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:33:30.081 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:30.081 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:30.081 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:30.081 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:30.081 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTljNzg0NWZhOWZiN2FkZjc1ZDczMWYzZDU2MjE3YWY3NWE5YmFlNzUzMDhlMDA0ZmRhMTgzNjU3OTc1YzRlNNKnvm8=: 00:33:30.081 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:30.081 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:30.081 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:30.081 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OTljNzg0NWZhOWZiN2FkZjc1ZDczMWYzZDU2MjE3YWY3NWE5YmFlNzUzMDhlMDA0ZmRhMTgzNjU3OTc1YzRlNNKnvm8=: 00:33:30.081 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:30.081 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:33:30.081 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:30.081 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:30.081 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:30.081 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:30.081 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:30.081 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:30.081 09:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:30.081 09:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.081 09:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:30.081 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:30.081 09:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:30.081 09:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:30.081 09:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:30.081 09:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:30.081 09:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:30.081 09:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:30.081 09:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:30.081 09:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:30.081 09:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:30.081 09:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:30.081 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:30.081 09:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:30.081 09:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.338 nvme0n1 00:33:30.338 09:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:30.338 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:30.338 09:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:30.338 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:30.338 09:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.338 09:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:30.338 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:30.338 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:30.338 09:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:33:30.338 09:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.596 09:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:30.596 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:30.596 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:30.596 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:33:30.596 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:30.596 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:30.596 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:30.596 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:30.596 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjkzNzQ3ZDcxM2YyNDY1MjQxZGQ0YzEzMjJmMTRiMGXzFvzn: 00:33:30.596 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTcxYzk5NmM2MzA0NmY4MzJiMGIwNjdlMTY5Y2IwOGRkZWE0ZmM0NGY1OTk5YjRkOTVjOGM2NzA5MzgzODZjZmQbMFE=: 00:33:30.596 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:30.596 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:30.596 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjkzNzQ3ZDcxM2YyNDY1MjQxZGQ0YzEzMjJmMTRiMGXzFvzn: 00:33:30.596 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTcxYzk5NmM2MzA0NmY4MzJiMGIwNjdlMTY5Y2IwOGRkZWE0ZmM0NGY1OTk5YjRkOTVjOGM2NzA5MzgzODZjZmQbMFE=: ]] 00:33:30.596 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTcxYzk5NmM2MzA0NmY4MzJiMGIwNjdlMTY5Y2IwOGRkZWE0ZmM0NGY1OTk5YjRkOTVjOGM2NzA5MzgzODZjZmQbMFE=: 00:33:30.596 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:33:30.596 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:30.596 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:30.596 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:30.596 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:30.596 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:30.596 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:30.596 09:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:30.596 09:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.596 09:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:30.596 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:30.596 09:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:30.596 09:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:30.596 09:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:30.596 09:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:30.596 09:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:30.596 09:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:33:30.596 09:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:30.596 09:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:30.596 09:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:30.596 09:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:30.596 09:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:30.596 09:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:30.596 09:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.162 nvme0n1 00:33:31.162 09:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.162 09:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:31.162 09:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:31.162 09:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.162 09:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.162 09:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.162 09:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:31.162 09:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:31.162 09:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.162 09:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.162 09:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.162 09:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:31.162 09:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:33:31.162 09:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:31.162 09:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:31.162 09:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:31.162 09:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:31.162 09:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTU0ZjY3ZjJkODc2NmI5ZTg2N2RjZDFmZDkyYjIwZGE5ZDUxYmZmYmQ2NmMxYjcxOuJ+qQ==: 00:33:31.162 09:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzExYzlkNGM4YzRlZjgzOTI4YTRhNzQwNmRjNjQzYzRkMzU3Y2U1MjY0ZWI2YWYw4Fl4Ww==: 00:33:31.162 09:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:31.162 09:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:31.162 09:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTU0ZjY3ZjJkODc2NmI5ZTg2N2RjZDFmZDkyYjIwZGE5ZDUxYmZmYmQ2NmMxYjcxOuJ+qQ==: 00:33:31.162 09:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzExYzlkNGM4YzRlZjgzOTI4YTRhNzQwNmRjNjQzYzRkMzU3Y2U1MjY0ZWI2YWYw4Fl4Ww==: ]] 00:33:31.162 09:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzExYzlkNGM4YzRlZjgzOTI4YTRhNzQwNmRjNjQzYzRkMzU3Y2U1MjY0ZWI2YWYw4Fl4Ww==: 00:33:31.162 09:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
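(For reference, each connect_authenticate pass traced here reduces to the RPC sequence below. This is a condensed sketch assembled only from commands visible in this log; rpc_cmd is assumed to be the autotest wrapper around SPDK's RPC client, and the NQNs, listener address, and key names are taken verbatim from the trace.)

rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'    # expect nvme0
rpc_cmd bdev_nvme_detach_controller nvme0
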
00:33:31.162 09:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:31.162 09:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:31.162 09:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:31.162 09:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:31.162 09:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:31.162 09:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:31.162 09:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.162 09:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.162 09:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.162 09:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:31.162 09:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:31.162 09:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:31.162 09:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:31.162 09:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:31.162 09:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:31.162 09:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:31.162 09:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:31.162 09:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:31.162 09:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:31.162 09:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:31.162 09:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:31.162 09:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.162 09:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.727 nvme0n1 00:33:31.727 09:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.727 09:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:31.727 09:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.727 09:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:31.727 09:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.727 09:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.727 09:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:31.727 09:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:31.727 09:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.727 09:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.727 09:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.727 09:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:33:31.727 09:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:33:31.727 09:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:31.727 09:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:31.727 09:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:31.727 09:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:31.727 09:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzZmOWJhMzk0YTJjYjY1NWFjNTk1ZWYxZDFjZDJhNzkpLKwQ: 00:33:31.727 09:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2JiNTZmYTE4ZDI4Yzc2Y2NiZTU5YWEyYjg4ZmExZGFsP2rI: 00:33:31.727 09:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:31.727 09:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:31.727 09:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzZmOWJhMzk0YTJjYjY1NWFjNTk1ZWYxZDFjZDJhNzkpLKwQ: 00:33:31.727 09:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2JiNTZmYTE4ZDI4Yzc2Y2NiZTU5YWEyYjg4ZmExZGFsP2rI: ]] 00:33:31.727 09:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2JiNTZmYTE4ZDI4Yzc2Y2NiZTU5YWEyYjg4ZmExZGFsP2rI: 00:33:31.727 09:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:33:31.727 09:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:31.727 09:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:31.727 09:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:31.727 09:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:31.727 09:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:31.727 09:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:31.727 09:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.727 09:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.727 09:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.727 09:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:31.727 09:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:31.727 09:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:31.727 09:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:31.727 09:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:31.727 09:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:31.727 09:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:31.727 09:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:31.727 09:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:31.727 09:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:31.727 09:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:31.727 09:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:31.727 09:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.727 09:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.293 nvme0n1 00:33:32.293 09:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.293 09:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:32.293 09:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.293 09:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.293 09:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:32.293 09:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.293 09:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:32.293 09:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:32.293 09:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.293 09:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.293 09:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.293 09:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:32.293 09:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:33:32.293 09:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:32.293 09:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:32.293 09:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:32.293 09:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:32.293 09:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDViNDA5NzY3M2Q1MDE2YjA1NWQ2ZTc0YTZiMTNlNjEyZGU3NDZjMzdkMzdkM2I5rKGuKA==: 00:33:32.293 09:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWIyYjliNGQyZTEwZmI1NTRhYmUyZjUyYTEyN2VmYzIJweoA: 00:33:32.293 09:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:32.293 09:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:32.293 09:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDViNDA5NzY3M2Q1MDE2YjA1NWQ2ZTc0YTZiMTNlNjEyZGU3NDZjMzdkMzdkM2I5rKGuKA==: 00:33:32.293 09:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWIyYjliNGQyZTEwZmI1NTRhYmUyZjUyYTEyN2VmYzIJweoA: ]] 00:33:32.293 09:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWIyYjliNGQyZTEwZmI1NTRhYmUyZjUyYTEyN2VmYzIJweoA: 00:33:32.293 09:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:33:32.293 09:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:32.293 09:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:32.293 09:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:32.293 09:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:32.293 09:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:32.293 09:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:32.293 09:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.293 09:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.293 09:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.293 09:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:32.293 09:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:32.293 09:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:32.293 09:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:32.293 09:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:32.293 09:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:32.293 09:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:32.293 09:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:32.293 09:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:32.293 09:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:32.293 09:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:32.293 09:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:32.293 09:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.293 09:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.857 nvme0n1 00:33:32.857 09:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.857 09:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:32.857 09:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.857 09:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.857 09:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:32.857 09:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.857 09:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:32.857 09:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:32.857 09:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.857 09:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.115 09:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.115 09:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:33.115 09:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:33:33.115 09:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:33.115 09:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:33.115 09:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:33.115 09:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:33.115 09:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OTljNzg0NWZhOWZiN2FkZjc1ZDczMWYzZDU2MjE3YWY3NWE5YmFlNzUzMDhlMDA0ZmRhMTgzNjU3OTc1YzRlNNKnvm8=: 00:33:33.115 09:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:33.115 09:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:33.115 09:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:33.115 09:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTljNzg0NWZhOWZiN2FkZjc1ZDczMWYzZDU2MjE3YWY3NWE5YmFlNzUzMDhlMDA0ZmRhMTgzNjU3OTc1YzRlNNKnvm8=: 00:33:33.115 09:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:33.115 09:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:33:33.115 09:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:33.115 09:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:33.115 09:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:33.115 09:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:33.115 09:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:33.115 09:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:33.115 09:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.115 09:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.115 09:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.115 09:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:33.115 09:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:33.115 09:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:33.115 09:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:33.115 09:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:33.115 09:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:33.115 09:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:33.115 09:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:33.115 09:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:33.115 09:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:33.115 09:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:33.115 09:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:33.115 09:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.115 09:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.680 nvme0n1 00:33:33.680 09:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.680 09:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:33.680 09:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.680 09:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.680 09:43:17 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:33.680 09:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.680 09:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:33.680 09:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:33.680 09:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.680 09:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.680 09:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.680 09:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:33.680 09:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:33.680 09:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:33:33.680 09:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:33.680 09:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:33.680 09:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:33.680 09:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:33.680 09:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjkzNzQ3ZDcxM2YyNDY1MjQxZGQ0YzEzMjJmMTRiMGXzFvzn: 00:33:33.680 09:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTcxYzk5NmM2MzA0NmY4MzJiMGIwNjdlMTY5Y2IwOGRkZWE0ZmM0NGY1OTk5YjRkOTVjOGM2NzA5MzgzODZjZmQbMFE=: 00:33:33.680 09:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:33.680 09:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:33.680 09:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjkzNzQ3ZDcxM2YyNDY1MjQxZGQ0YzEzMjJmMTRiMGXzFvzn: 00:33:33.680 09:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTcxYzk5NmM2MzA0NmY4MzJiMGIwNjdlMTY5Y2IwOGRkZWE0ZmM0NGY1OTk5YjRkOTVjOGM2NzA5MzgzODZjZmQbMFE=: ]] 00:33:33.680 09:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTcxYzk5NmM2MzA0NmY4MzJiMGIwNjdlMTY5Y2IwOGRkZWE0ZmM0NGY1OTk5YjRkOTVjOGM2NzA5MzgzODZjZmQbMFE=: 00:33:33.680 09:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:33:33.680 09:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:33.680 09:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:33.680 09:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:33.680 09:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:33.680 09:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:33.680 09:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:33.680 09:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.680 09:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.680 09:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.680 09:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:33.680 09:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:33.680 09:43:17 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:33:33.680 09:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:33.680 09:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:33.680 09:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:33.680 09:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:33.680 09:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:33.680 09:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:33.680 09:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:33.680 09:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:33.680 09:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:33.680 09:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.680 09:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:34.614 nvme0n1 00:33:34.614 09:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.614 09:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:34.614 09:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.614 09:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:34.614 09:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:34.614 09:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.614 09:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:34.614 09:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:34.614 09:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.614 09:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:34.614 09:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.614 09:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:34.614 09:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:33:34.614 09:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:34.614 09:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:34.614 09:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:34.614 09:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:34.614 09:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTU0ZjY3ZjJkODc2NmI5ZTg2N2RjZDFmZDkyYjIwZGE5ZDUxYmZmYmQ2NmMxYjcxOuJ+qQ==: 00:33:34.614 09:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzExYzlkNGM4YzRlZjgzOTI4YTRhNzQwNmRjNjQzYzRkMzU3Y2U1MjY0ZWI2YWYw4Fl4Ww==: 00:33:34.614 09:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:34.614 09:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:34.614 09:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NTU0ZjY3ZjJkODc2NmI5ZTg2N2RjZDFmZDkyYjIwZGE5ZDUxYmZmYmQ2NmMxYjcxOuJ+qQ==: 00:33:34.614 09:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzExYzlkNGM4YzRlZjgzOTI4YTRhNzQwNmRjNjQzYzRkMzU3Y2U1MjY0ZWI2YWYw4Fl4Ww==: ]] 00:33:34.614 09:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzExYzlkNGM4YzRlZjgzOTI4YTRhNzQwNmRjNjQzYzRkMzU3Y2U1MjY0ZWI2YWYw4Fl4Ww==: 00:33:34.614 09:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:33:34.614 09:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:34.614 09:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:34.614 09:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:34.614 09:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:34.614 09:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:34.614 09:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:34.614 09:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.614 09:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:34.614 09:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.614 09:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:34.614 09:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:34.614 09:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:34.614 09:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:34.614 09:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:34.614 09:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:34.614 09:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:34.614 09:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:34.614 09:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:34.614 09:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:34.614 09:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:34.614 09:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:34.614 09:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.614 09:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:35.549 nvme0n1 00:33:35.549 09:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.549 09:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:35.549 09:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:35.549 09:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.549 09:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:35.549 09:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.549 09:43:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:35.549 09:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:35.549 09:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.549 09:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:35.549 09:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.549 09:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:35.549 09:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:33:35.549 09:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:35.549 09:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:35.549 09:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:35.549 09:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:35.549 09:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzZmOWJhMzk0YTJjYjY1NWFjNTk1ZWYxZDFjZDJhNzkpLKwQ: 00:33:35.549 09:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2JiNTZmYTE4ZDI4Yzc2Y2NiZTU5YWEyYjg4ZmExZGFsP2rI: 00:33:35.549 09:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:35.549 09:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:35.549 09:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzZmOWJhMzk0YTJjYjY1NWFjNTk1ZWYxZDFjZDJhNzkpLKwQ: 00:33:35.549 09:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2JiNTZmYTE4ZDI4Yzc2Y2NiZTU5YWEyYjg4ZmExZGFsP2rI: ]] 00:33:35.549 09:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2JiNTZmYTE4ZDI4Yzc2Y2NiZTU5YWEyYjg4ZmExZGFsP2rI: 00:33:35.549 09:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:33:35.549 09:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:35.549 09:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:35.549 09:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:35.549 09:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:35.549 09:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:35.549 09:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:35.549 09:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.549 09:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:35.549 09:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.549 09:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:35.549 09:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:35.549 09:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:35.549 09:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:35.549 09:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:35.549 09:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:35.549 09:43:19 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:35.549 09:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:35.549 09:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:35.549 09:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:35.549 09:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:35.549 09:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:35.549 09:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.549 09:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.483 nvme0n1 00:33:36.483 09:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:36.483 09:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:36.483 09:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:36.483 09:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:36.483 09:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.483 09:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:36.483 09:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:36.483 09:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:36.483 09:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:36.483 09:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.483 09:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:36.483 09:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:36.483 09:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:33:36.483 09:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:36.483 09:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:36.483 09:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:36.483 09:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:36.483 09:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDViNDA5NzY3M2Q1MDE2YjA1NWQ2ZTc0YTZiMTNlNjEyZGU3NDZjMzdkMzdkM2I5rKGuKA==: 00:33:36.483 09:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWIyYjliNGQyZTEwZmI1NTRhYmUyZjUyYTEyN2VmYzIJweoA: 00:33:36.483 09:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:36.483 09:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:36.483 09:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDViNDA5NzY3M2Q1MDE2YjA1NWQ2ZTc0YTZiMTNlNjEyZGU3NDZjMzdkMzdkM2I5rKGuKA==: 00:33:36.483 09:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWIyYjliNGQyZTEwZmI1NTRhYmUyZjUyYTEyN2VmYzIJweoA: ]] 00:33:36.483 09:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWIyYjliNGQyZTEwZmI1NTRhYmUyZjUyYTEyN2VmYzIJweoA: 00:33:36.483 09:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:33:36.483 09:43:20 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:36.483 09:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:36.484 09:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:36.484 09:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:36.484 09:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:36.484 09:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:36.484 09:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:36.484 09:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:36.484 09:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:36.484 09:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:36.484 09:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:36.484 09:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:36.484 09:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:36.484 09:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:36.484 09:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:36.484 09:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:36.484 09:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:36.484 09:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:36.484 09:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:36.484 09:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:36.484 09:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:36.484 09:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:36.484 09:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.857 nvme0n1 00:33:37.857 09:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:37.857 09:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:37.857 09:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:37.857 09:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:37.857 09:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.857 09:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:37.857 09:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:37.857 09:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:37.857 09:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:37.857 09:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.857 09:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:37.857 09:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:33:37.857 09:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:33:37.857 09:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:37.857 09:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:37.857 09:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:37.857 09:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:37.857 09:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTljNzg0NWZhOWZiN2FkZjc1ZDczMWYzZDU2MjE3YWY3NWE5YmFlNzUzMDhlMDA0ZmRhMTgzNjU3OTc1YzRlNNKnvm8=: 00:33:37.857 09:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:37.857 09:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:37.857 09:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:37.857 09:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTljNzg0NWZhOWZiN2FkZjc1ZDczMWYzZDU2MjE3YWY3NWE5YmFlNzUzMDhlMDA0ZmRhMTgzNjU3OTc1YzRlNNKnvm8=: 00:33:37.857 09:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:37.857 09:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:33:37.857 09:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:37.857 09:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:37.857 09:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:37.857 09:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:37.857 09:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:37.857 09:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:37.857 09:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:37.857 09:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.857 09:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:37.857 09:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:37.857 09:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:37.857 09:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:37.857 09:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:37.857 09:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:37.857 09:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:37.857 09:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:37.857 09:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:37.857 09:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:37.857 09:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:37.857 09:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:37.857 09:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:37.857 09:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:33:37.857 09:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.793 nvme0n1 00:33:38.793 09:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:38.793 09:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:38.793 09:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:38.793 09:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:38.793 09:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.793 09:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:38.793 09:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:38.793 09:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:38.793 09:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:38.793 09:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.793 09:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:38.793 09:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:38.793 09:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:38.793 09:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:38.793 09:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:38.793 09:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:38.793 09:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTU0ZjY3ZjJkODc2NmI5ZTg2N2RjZDFmZDkyYjIwZGE5ZDUxYmZmYmQ2NmMxYjcxOuJ+qQ==: 00:33:38.793 09:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzExYzlkNGM4YzRlZjgzOTI4YTRhNzQwNmRjNjQzYzRkMzU3Y2U1MjY0ZWI2YWYw4Fl4Ww==: 00:33:38.793 09:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:38.793 09:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:38.793 09:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTU0ZjY3ZjJkODc2NmI5ZTg2N2RjZDFmZDkyYjIwZGE5ZDUxYmZmYmQ2NmMxYjcxOuJ+qQ==: 00:33:38.793 09:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzExYzlkNGM4YzRlZjgzOTI4YTRhNzQwNmRjNjQzYzRkMzU3Y2U1MjY0ZWI2YWYw4Fl4Ww==: ]] 00:33:38.793 09:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzExYzlkNGM4YzRlZjgzOTI4YTRhNzQwNmRjNjQzYzRkMzU3Y2U1MjY0ZWI2YWYw4Fl4Ww==: 00:33:38.793 09:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:38.793 09:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:38.793 09:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.793 09:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:38.793 09:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:33:38.793 09:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:38.793 09:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:38.793 09:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:38.793 09:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:38.793 
09:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:38.793 09:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:38.793 09:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:38.793 09:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:38.793 09:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:38.793 09:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:38.793 09:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:38.793 09:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:38.793 09:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:38.793 09:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:38.793 09:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:38.793 09:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:38.793 09:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:38.793 09:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:38.793 09:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:38.793 09:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.793 request: 00:33:38.793 { 00:33:38.793 "name": "nvme0", 00:33:38.793 "trtype": "tcp", 00:33:38.793 "traddr": "10.0.0.1", 00:33:38.793 "adrfam": "ipv4", 00:33:38.793 "trsvcid": "4420", 00:33:38.793 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:38.793 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:38.793 "prchk_reftag": false, 00:33:38.793 "prchk_guard": false, 00:33:38.793 "hdgst": false, 00:33:38.793 "ddgst": false, 00:33:38.793 "method": "bdev_nvme_attach_controller", 00:33:38.793 "req_id": 1 00:33:38.793 } 00:33:38.793 Got JSON-RPC error response 00:33:38.793 response: 00:33:38.793 { 00:33:38.793 "code": -5, 00:33:38.793 "message": "Input/output error" 00:33:38.793 } 00:33:38.793 09:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:38.793 09:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:38.793 09:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:38.793 09:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:38.794 09:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:38.794 09:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:33:38.794 09:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:38.794 09:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.794 09:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:33:38.794 09:43:23 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:38.794 09:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:33:38.794 09:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:33:38.794 09:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:38.794 09:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:38.794 09:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:38.794 09:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:38.794 09:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:38.794 09:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:38.794 09:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:38.794 09:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:38.794 09:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:38.794 09:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:38.794 09:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:38.794 09:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:38.794 09:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:38.794 09:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:38.794 09:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:38.794 09:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:38.794 09:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:38.794 09:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:38.794 09:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:38.794 09:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.794 request: 00:33:38.794 { 00:33:38.794 "name": "nvme0", 00:33:38.794 "trtype": "tcp", 00:33:38.794 "traddr": "10.0.0.1", 00:33:38.794 "adrfam": "ipv4", 00:33:38.794 "trsvcid": "4420", 00:33:38.794 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:38.794 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:38.794 "prchk_reftag": false, 00:33:38.794 "prchk_guard": false, 00:33:38.794 "hdgst": false, 00:33:38.794 "ddgst": false, 00:33:38.794 "dhchap_key": "key2", 00:33:38.794 "method": "bdev_nvme_attach_controller", 00:33:38.794 "req_id": 1 00:33:38.794 } 00:33:38.794 Got JSON-RPC error response 00:33:38.794 response: 00:33:38.794 { 00:33:38.794 "code": -5, 00:33:38.794 "message": "Input/output error" 00:33:38.794 } 00:33:38.794 09:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:38.794 09:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:38.794 09:43:23 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:38.794 09:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:38.794 09:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:38.794 09:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:33:38.794 09:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:38.794 09:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:33:38.794 09:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.794 09:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:39.052 09:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:33:39.052 09:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:33:39.052 09:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:39.052 09:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:39.052 09:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:39.052 09:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:39.052 09:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:39.052 09:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:39.052 09:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:39.052 09:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:39.052 09:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:39.052 09:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:39.052 09:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:39.052 09:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:39.052 09:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:39.052 09:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:39.052 09:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:39.052 09:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:39.052 09:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:39.052 09:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:39.052 09:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:39.052 09:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.052 request: 00:33:39.052 { 00:33:39.052 "name": "nvme0", 00:33:39.052 "trtype": "tcp", 00:33:39.052 "traddr": "10.0.0.1", 00:33:39.052 "adrfam": "ipv4", 
00:33:39.052 "trsvcid": "4420", 00:33:39.052 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:39.052 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:39.052 "prchk_reftag": false, 00:33:39.052 "prchk_guard": false, 00:33:39.052 "hdgst": false, 00:33:39.052 "ddgst": false, 00:33:39.052 "dhchap_key": "key1", 00:33:39.052 "dhchap_ctrlr_key": "ckey2", 00:33:39.052 "method": "bdev_nvme_attach_controller", 00:33:39.052 "req_id": 1 00:33:39.052 } 00:33:39.052 Got JSON-RPC error response 00:33:39.052 response: 00:33:39.052 { 00:33:39.052 "code": -5, 00:33:39.052 "message": "Input/output error" 00:33:39.052 } 00:33:39.052 09:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:39.052 09:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:39.052 09:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:39.052 09:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:39.052 09:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:39.052 09:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:33:39.052 09:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:33:39.052 09:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:33:39.052 09:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:39.052 09:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:33:39.052 09:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:39.052 09:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:33:39.052 09:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:39.052 09:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:39.052 rmmod nvme_tcp 00:33:39.052 rmmod nvme_fabrics 00:33:39.052 09:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:39.052 09:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:33:39.052 09:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:33:39.052 09:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 879098 ']' 00:33:39.052 09:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 879098 00:33:39.052 09:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 879098 ']' 00:33:39.052 09:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 879098 00:33:39.052 09:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:33:39.052 09:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:39.052 09:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 879098 00:33:39.052 09:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:39.052 09:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:39.052 09:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 879098' 00:33:39.052 killing process with pid 879098 00:33:39.052 09:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 879098 00:33:39.052 09:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 879098 00:33:39.309 09:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 
00:33:39.309 09:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:39.309 09:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:39.309 09:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:39.309 09:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:39.309 09:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:39.309 09:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:39.309 09:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:41.237 09:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:41.237 09:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:33:41.237 09:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:41.237 09:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:33:41.237 09:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:33:41.237 09:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:33:41.237 09:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:41.237 09:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:41.237 09:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:41.237 09:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:41.237 09:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:33:41.237 09:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:33:41.495 09:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:42.871 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:42.871 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:42.871 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:42.871 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:42.871 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:42.871 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:42.871 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:42.872 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:42.872 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:42.872 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:42.872 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:42.872 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:42.872 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:42.872 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:42.872 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:42.872 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:43.809 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:33:43.809 09:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.pMz /tmp/spdk.key-null.OiM /tmp/spdk.key-sha256.BnO /tmp/spdk.key-sha384.TA0 /tmp/spdk.key-sha512.D5r 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:33:43.809 09:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:44.744 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:44.744 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:44.744 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:44.744 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:44.744 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:44.744 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:44.744 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:44.744 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:44.744 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:44.744 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:44.744 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:44.744 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:44.744 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:44.744 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:44.744 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:44.744 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:44.744 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:45.003 00:33:45.003 real 0m50.004s 00:33:45.003 user 0m47.883s 00:33:45.003 sys 0m5.668s 00:33:45.003 09:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:45.003 09:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.003 ************************************ 00:33:45.003 END TEST nvmf_auth_host 00:33:45.003 ************************************ 00:33:45.003 09:43:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:33:45.003 09:43:29 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:33:45.003 09:43:29 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:45.003 09:43:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:45.003 09:43:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:45.003 09:43:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:45.003 ************************************ 00:33:45.003 START TEST nvmf_digest 00:33:45.003 ************************************ 00:33:45.003 09:43:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:45.003 * Looking for test storage... 
00:33:45.003 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:45.003 09:43:29 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:45.003 09:43:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:33:45.003 09:43:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:45.003 09:43:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:45.003 09:43:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:45.003 09:43:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:45.003 09:43:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:45.003 09:43:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:45.003 09:43:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:45.003 09:43:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:45.003 09:43:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:45.003 09:43:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:45.003 09:43:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:45.003 09:43:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:45.003 09:43:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:45.003 09:43:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:45.003 09:43:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:45.003 09:43:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:45.003 09:43:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:45.003 09:43:29 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:45.003 09:43:29 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:45.003 09:43:29 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:45.003 09:43:29 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.003 09:43:29 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.003 09:43:29 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.003 09:43:29 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:33:45.003 09:43:29 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.003 09:43:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:33:45.003 09:43:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:45.003 09:43:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:45.003 09:43:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:45.003 09:43:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:45.003 09:43:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:45.003 09:43:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:45.003 09:43:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:45.003 09:43:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:45.003 09:43:29 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:45.003 09:43:29 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:33:45.003 09:43:29 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:33:45.003 09:43:29 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:33:45.003 09:43:29 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:33:45.003 09:43:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:45.003 09:43:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:45.003 09:43:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:45.003 09:43:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:45.003 09:43:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:45.003 09:43:29 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:45.003 09:43:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:45.003 09:43:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:45.003 09:43:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:45.003 09:43:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:45.003 09:43:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:33:45.003 09:43:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:46.905 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:46.905 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:46.905 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:46.905 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:46.905 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:46.906 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:46.906 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:46.906 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:46.906 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:46.906 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:46.906 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:46.906 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:46.906 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:46.906 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:46.906 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:46.906 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:46.906 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:46.906 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:47.164 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:47.164 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:47.164 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:47.164 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:47.164 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:47.164 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:47.164 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:47.164 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:47.164 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.118 ms 00:33:47.164 00:33:47.164 --- 10.0.0.2 ping statistics --- 00:33:47.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:47.164 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:33:47.164 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:47.164 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:47.164 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms 00:33:47.164 00:33:47.164 --- 10.0.0.1 ping statistics --- 00:33:47.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:47.164 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:33:47.164 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:47.164 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:33:47.164 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:47.164 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:47.164 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:47.164 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:47.164 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:47.164 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:47.164 09:43:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:47.164 09:43:31 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:33:47.164 09:43:31 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:33:47.164 09:43:31 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:33:47.164 09:43:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:47.164 09:43:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:47.164 09:43:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:47.164 ************************************ 00:33:47.164 START TEST nvmf_digest_clean 00:33:47.164 ************************************ 00:33:47.164 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:33:47.164 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:33:47.164 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:33:47.164 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:33:47.164 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:33:47.164 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:33:47.164 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:47.164 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:47.164 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:47.164 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=888542 00:33:47.164 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:47.164 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 888542 00:33:47.164 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 888542 ']' 00:33:47.164 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:47.164 
09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:47.164 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:47.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:47.164 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:47.164 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:47.164 [2024-07-14 09:43:31.557700] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:33:47.164 [2024-07-14 09:43:31.557779] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:47.164 EAL: No free 2048 kB hugepages reported on node 1 00:33:47.422 [2024-07-14 09:43:31.623384] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:47.422 [2024-07-14 09:43:31.711283] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:47.422 [2024-07-14 09:43:31.711346] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:47.422 [2024-07-14 09:43:31.711359] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:47.422 [2024-07-14 09:43:31.711369] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:47.422 [2024-07-14 09:43:31.711379] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
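Before the digest tests start, nvmftestinit moves one port of the detected E810 pair into a private network namespace so that the initiator (10.0.0.1) and the target (10.0.0.2) exchange traffic over a real TCP path on a single machine, and nvmfappstart then launches nvmf_tgt inside that namespace paused on --wait-for-rpc. A rough sketch of that plumbing, using the exact commands, addresses and interface names printed above (the cvl_0_0/cvl_0_1 names are host-specific and will differ on other machines):

  # target-side port goes into its own namespace, initiator-side port stays in the root ns
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # sanity pings in both directions, as shown in the trace
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

  # the target then runs inside the namespace, waiting for RPC configuration
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc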
00:33:47.422 [2024-07-14 09:43:31.711404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:47.422 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:47.422 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:33:47.422 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:47.422 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:47.422 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:47.422 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:47.422 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:33:47.422 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:33:47.422 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:33:47.422 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:47.422 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:47.680 null0 00:33:47.680 [2024-07-14 09:43:31.912320] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:47.680 [2024-07-14 09:43:31.936529] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:47.680 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:47.680 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:33:47.680 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:47.680 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:47.680 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:47.680 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:47.680 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:47.680 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:47.680 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=888577 00:33:47.680 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 888577 /var/tmp/bperf.sock 00:33:47.680 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:47.680 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 888577 ']' 00:33:47.680 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:47.681 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:47.681 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:33:47.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:47.681 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:47.681 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:47.681 [2024-07-14 09:43:31.983404] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:33:47.681 [2024-07-14 09:43:31.983468] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid888577 ] 00:33:47.681 EAL: No free 2048 kB hugepages reported on node 1 00:33:47.681 [2024-07-14 09:43:32.045326] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:47.938 [2024-07-14 09:43:32.137367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:47.938 09:43:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:47.938 09:43:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:33:47.938 09:43:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:47.938 09:43:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:47.938 09:43:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:48.196 09:43:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:48.196 09:43:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:48.453 nvme0n1 00:33:48.454 09:43:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:48.454 09:43:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:48.711 Running I/O for 2 seconds... 
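Annotation: condensed, each run_bperf iteration in this log drives bdevperf through the same RPC sequence (commands taken verbatim from the trace; the PID, workload, I/O size, and queue depth vary per run):

    # start bdevperf paused on core 1, speaking RPC on /var/tmp/bperf.sock
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    # (the script waits for bperf.sock to appear before issuing RPCs)
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # finish initialization, then attach the target namespace with data digest enabled
    $rpc -s /var/tmp/bperf.sock framework_start_init
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # drive the 2-second workload
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests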
00:33:50.615 00:33:50.615 Latency(us) 00:33:50.615 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:50.615 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:50.615 nvme0n1 : 2.01 19804.33 77.36 0.00 0.00 6453.89 3106.89 12039.21 00:33:50.615 =================================================================================================================== 00:33:50.615 Total : 19804.33 77.36 0.00 0.00 6453.89 3106.89 12039.21 00:33:50.615 0 00:33:50.615 09:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:50.615 09:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:50.615 09:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:50.615 09:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:50.615 | select(.opcode=="crc32c") 00:33:50.615 | "\(.module_name) \(.executed)"' 00:33:50.615 09:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:50.874 09:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:50.874 09:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:50.874 09:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:50.874 09:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:50.874 09:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 888577 00:33:50.874 09:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 888577 ']' 00:33:50.874 09:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 888577 00:33:50.874 09:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:33:50.874 09:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:50.874 09:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 888577 00:33:50.874 09:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:50.874 09:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:50.874 09:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 888577' 00:33:50.874 killing process with pid 888577 00:33:50.874 09:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 888577 00:33:50.874 Received shutdown signal, test time was about 2.000000 seconds 00:33:50.874 00:33:50.874 Latency(us) 00:33:50.874 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:50.874 =================================================================================================================== 00:33:50.874 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:50.874 09:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 888577 00:33:51.133 09:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:33:51.133 09:43:35 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:51.133 09:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:51.133 09:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:51.133 09:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:51.133 09:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:51.133 09:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:51.133 09:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=888988 00:33:51.133 09:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:51.133 09:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 888988 /var/tmp/bperf.sock 00:33:51.133 09:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 888988 ']' 00:33:51.133 09:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:51.133 09:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:51.133 09:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:51.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:51.133 09:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:51.133 09:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:51.133 [2024-07-14 09:43:35.508710] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:33:51.133 [2024-07-14 09:43:35.508804] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid888988 ] 00:33:51.133 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:51.133 Zero copy mechanism will not be used. 
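Annotation: the crc32c accounting check that closed the previous iteration (host/digest.sh@93–@98 above, 09:43:34–35) reduces to the following sketch; with DSA scanning disabled the expected accel module is "software", and $bperfpid is the PID captured when that bdevperf instance was launched:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # pull accel stats from bdevperf and keep only the crc32c opcode entry
    read -r acc_module acc_executed < <($rpc -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    (( acc_executed > 0 ))            # digests were actually computed...
    [[ $acc_module == "software" ]]   # ...by the expected accel module
    kill "$bperfpid"                  # tear down this bdevperf instance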
00:33:51.133 EAL: No free 2048 kB hugepages reported on node 1 00:33:51.133 [2024-07-14 09:43:35.572943] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:51.390 [2024-07-14 09:43:35.664245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:51.390 09:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:51.390 09:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:33:51.390 09:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:51.391 09:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:51.391 09:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:51.648 09:43:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:51.648 09:43:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:52.214 nvme0n1 00:33:52.214 09:43:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:52.214 09:43:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:52.214 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:52.214 Zero copy mechanism will not be used. 00:33:52.214 Running I/O for 2 seconds... 
00:33:54.739 00:33:54.739 Latency(us) 00:33:54.739 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:54.739 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:33:54.739 nvme0n1 : 2.01 2369.83 296.23 0.00 0.00 6746.03 6359.42 8543.95 00:33:54.739 =================================================================================================================== 00:33:54.739 Total : 2369.83 296.23 0.00 0.00 6746.03 6359.42 8543.95 00:33:54.739 0 00:33:54.739 09:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:54.739 09:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:54.739 09:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:54.739 09:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:54.739 | select(.opcode=="crc32c") 00:33:54.739 | "\(.module_name) \(.executed)"' 00:33:54.739 09:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:54.739 09:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:54.739 09:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:54.739 09:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:54.739 09:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:54.739 09:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 888988 00:33:54.739 09:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 888988 ']' 00:33:54.739 09:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 888988 00:33:54.739 09:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:33:54.739 09:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:54.739 09:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 888988 00:33:54.739 09:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:54.739 09:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:54.739 09:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 888988' 00:33:54.739 killing process with pid 888988 00:33:54.739 09:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 888988 00:33:54.739 Received shutdown signal, test time was about 2.000000 seconds 00:33:54.739 00:33:54.739 Latency(us) 00:33:54.739 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:54.739 =================================================================================================================== 00:33:54.739 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:54.739 09:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 888988 00:33:54.739 09:43:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:33:54.739 09:43:39 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:54.739 09:43:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:54.739 09:43:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:54.739 09:43:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:54.739 09:43:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:54.739 09:43:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:54.739 09:43:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=889397 00:33:54.739 09:43:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:54.739 09:43:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 889397 /var/tmp/bperf.sock 00:33:54.739 09:43:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 889397 ']' 00:33:54.739 09:43:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:54.739 09:43:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:54.740 09:43:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:54.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:54.740 09:43:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:54.740 09:43:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:54.740 [2024-07-14 09:43:39.156118] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:33:54.740 [2024-07-14 09:43:39.156228] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid889397 ] 00:33:54.740 EAL: No free 2048 kB hugepages reported on node 1 00:33:54.998 [2024-07-14 09:43:39.215087] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:54.998 [2024-07-14 09:43:39.300922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:54.998 09:43:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:54.998 09:43:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:33:54.998 09:43:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:54.998 09:43:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:54.998 09:43:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:55.256 09:43:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:55.256 09:43:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:55.821 nvme0n1 00:33:55.821 09:43:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:55.821 09:43:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:55.821 Running I/O for 2 seconds... 
00:33:58.348 00:33:58.348 Latency(us) 00:33:58.348 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:58.348 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:58.348 nvme0n1 : 2.01 19321.14 75.47 0.00 0.00 6609.89 5922.51 15631.55 00:33:58.349 =================================================================================================================== 00:33:58.349 Total : 19321.14 75.47 0.00 0.00 6609.89 5922.51 15631.55 00:33:58.349 0 00:33:58.349 09:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:58.349 09:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:58.349 09:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:58.349 09:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:58.349 | select(.opcode=="crc32c") 00:33:58.349 | "\(.module_name) \(.executed)"' 00:33:58.349 09:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:58.349 09:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:58.349 09:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:58.349 09:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:58.349 09:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:58.349 09:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 889397 00:33:58.349 09:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 889397 ']' 00:33:58.349 09:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 889397 00:33:58.349 09:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:33:58.349 09:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:58.349 09:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 889397 00:33:58.349 09:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:58.349 09:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:58.349 09:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 889397' 00:33:58.349 killing process with pid 889397 00:33:58.349 09:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 889397 00:33:58.349 Received shutdown signal, test time was about 2.000000 seconds 00:33:58.349 00:33:58.349 Latency(us) 00:33:58.349 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:58.349 =================================================================================================================== 00:33:58.349 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:58.349 09:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 889397 00:33:58.349 09:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:33:58.349 09:43:42 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:58.349 09:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:58.349 09:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:58.349 09:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:58.349 09:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:58.349 09:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:58.349 09:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=889803 00:33:58.349 09:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:58.349 09:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 889803 /var/tmp/bperf.sock 00:33:58.349 09:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 889803 ']' 00:33:58.349 09:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:58.349 09:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:58.349 09:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:58.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:58.349 09:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:58.349 09:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:58.349 [2024-07-14 09:43:42.788757] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:33:58.349 [2024-07-14 09:43:42.788850] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid889803 ] 00:33:58.349 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:58.349 Zero copy mechanism will not be used. 
00:33:58.608 EAL: No free 2048 kB hugepages reported on node 1 00:33:58.608 [2024-07-14 09:43:42.854009] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:58.608 [2024-07-14 09:43:42.945353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:58.608 09:43:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:58.608 09:43:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:33:58.608 09:43:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:58.608 09:43:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:58.608 09:43:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:59.268 09:43:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:59.268 09:43:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:59.268 nvme0n1 00:33:59.268 09:43:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:59.268 09:43:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:59.526 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:59.526 Zero copy mechanism will not be used. 00:33:59.526 Running I/O for 2 seconds... 
00:34:01.421 00:34:01.421 Latency(us) 00:34:01.421 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:01.421 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:34:01.421 nvme0n1 : 2.01 1508.14 188.52 0.00 0.00 10576.95 8349.77 21456.97 00:34:01.421 =================================================================================================================== 00:34:01.421 Total : 1508.14 188.52 0.00 0.00 10576.95 8349.77 21456.97 00:34:01.421 0 00:34:01.421 09:43:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:01.421 09:43:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:01.421 09:43:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:01.421 09:43:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:01.421 09:43:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:01.421 | select(.opcode=="crc32c") 00:34:01.421 | "\(.module_name) \(.executed)"' 00:34:01.678 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:01.678 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:01.678 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:01.678 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:01.678 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 889803 00:34:01.678 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 889803 ']' 00:34:01.678 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 889803 00:34:01.678 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:34:01.678 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:01.678 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 889803 00:34:01.678 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:34:01.678 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:34:01.678 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 889803' 00:34:01.678 killing process with pid 889803 00:34:01.678 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 889803 00:34:01.678 Received shutdown signal, test time was about 2.000000 seconds 00:34:01.678 00:34:01.678 Latency(us) 00:34:01.678 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:01.678 =================================================================================================================== 00:34:01.678 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:01.678 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 889803 00:34:01.936 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 888542 00:34:01.936 09:43:46 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 888542 ']' 00:34:01.936 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 888542 00:34:01.936 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:34:01.936 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:01.936 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 888542 00:34:01.936 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:01.936 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:01.936 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 888542' 00:34:01.936 killing process with pid 888542 00:34:01.936 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 888542 00:34:01.936 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 888542 00:34:02.194 00:34:02.194 real 0m15.077s 00:34:02.194 user 0m30.398s 00:34:02.194 sys 0m3.829s 00:34:02.194 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:02.194 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:02.194 ************************************ 00:34:02.194 END TEST nvmf_digest_clean 00:34:02.194 ************************************ 00:34:02.194 09:43:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:34:02.194 09:43:46 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:34:02.194 09:43:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:02.194 09:43:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:02.194 09:43:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:02.194 ************************************ 00:34:02.194 START TEST nvmf_digest_error 00:34:02.194 ************************************ 00:34:02.194 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:34:02.194 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:34:02.194 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:02.194 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:02.194 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:02.194 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=890354 00:34:02.194 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:34:02.194 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 890354 00:34:02.194 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 890354 ']' 00:34:02.194 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 
00:34:02.194 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:02.194 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:02.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:02.194 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:02.194 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:02.451 [2024-07-14 09:43:46.686298] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:34:02.451 [2024-07-14 09:43:46.686386] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:02.451 EAL: No free 2048 kB hugepages reported on node 1 00:34:02.451 [2024-07-14 09:43:46.750726] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:02.451 [2024-07-14 09:43:46.834356] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:02.451 [2024-07-14 09:43:46.834430] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:02.451 [2024-07-14 09:43:46.834453] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:02.451 [2024-07-14 09:43:46.834463] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:02.451 [2024-07-14 09:43:46.834472] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
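Annotation: for the error-path test starting here, digest.sh reroutes the crc32c opcode through the error accel module and injects corruption before each bdevperf run, so reads with --ddgst enabled complete with data digest errors. The RPCs appear verbatim in the trace below; condensed (rpc_cmd talks to the target on the default /var/tmp/spdk.sock, bperf_rpc to /var/tmp/bperf.sock):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # target side: assign crc32c to the error-injection accel module
    $rpc accel_assign_opc -o crc32c -m error
    # initiator side (bdevperf): enable NVMe error statistics, set bdev retry count to -1
    $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # keep injection disabled while the --ddgst controller attaches...
    $rpc accel_error_inject_error -o crc32c -t disable
    # ...then corrupt 256 crc32c operations before perform_tests runs
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 256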
00:34:02.451 [2024-07-14 09:43:46.834504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:02.451 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:02.451 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:34:02.451 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:02.451 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:02.451 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:02.708 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:02.708 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:34:02.708 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.708 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:02.708 [2024-07-14 09:43:46.923072] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:34:02.708 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.708 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:34:02.708 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:34:02.708 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.708 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:02.708 null0 00:34:02.708 [2024-07-14 09:43:47.037965] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:02.708 [2024-07-14 09:43:47.062183] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:02.708 09:43:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.708 09:43:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:34:02.708 09:43:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:34:02.708 09:43:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:34:02.708 09:43:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:34:02.708 09:43:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:34:02.708 09:43:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=890378 00:34:02.708 09:43:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 890378 /var/tmp/bperf.sock 00:34:02.708 09:43:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:34:02.708 09:43:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 890378 ']' 00:34:02.708 09:43:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:02.708 09:43:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:34:02.708 09:43:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:02.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:02.708 09:43:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:02.708 09:43:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:02.708 [2024-07-14 09:43:47.108206] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:34:02.708 [2024-07-14 09:43:47.108266] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid890378 ] 00:34:02.708 EAL: No free 2048 kB hugepages reported on node 1 00:34:02.966 [2024-07-14 09:43:47.168416] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:02.966 [2024-07-14 09:43:47.259271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:02.966 09:43:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:02.966 09:43:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:34:02.966 09:43:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:02.966 09:43:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:03.223 09:43:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:34:03.223 09:43:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:03.223 09:43:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:03.223 09:43:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:03.223 09:43:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:03.223 09:43:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:03.785 nvme0n1 00:34:03.785 09:43:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:34:03.785 09:43:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:03.785 09:43:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:03.785 09:43:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:03.785 09:43:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:34:03.785 09:43:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:03.785 Running I/O for 2 seconds... 00:34:04.042 [2024-07-14 09:43:48.242893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.042 [2024-07-14 09:43:48.242953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.042 [2024-07-14 09:43:48.242976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.042 [2024-07-14 09:43:48.257744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.042 [2024-07-14 09:43:48.257780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:3163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.042 [2024-07-14 09:43:48.257806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.042 [2024-07-14 09:43:48.272989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.042 [2024-07-14 09:43:48.273021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.042 [2024-07-14 09:43:48.273048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.042 [2024-07-14 09:43:48.285140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.042 [2024-07-14 09:43:48.285171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:8626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.042 [2024-07-14 09:43:48.285211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.042 [2024-07-14 09:43:48.299643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.042 [2024-07-14 09:43:48.299679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:8026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.042 [2024-07-14 09:43:48.299698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.042 [2024-07-14 09:43:48.313631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.042 [2024-07-14 09:43:48.313665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.042 [2024-07-14 09:43:48.313684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.042 [2024-07-14 09:43:48.325829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.042 [2024-07-14 09:43:48.325864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:10886 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:34:04.042 [2024-07-14 09:43:48.325922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.042 [2024-07-14 09:43:48.340784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.042 [2024-07-14 09:43:48.340818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:1792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.042 [2024-07-14 09:43:48.340837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.042 [2024-07-14 09:43:48.354922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.042 [2024-07-14 09:43:48.354954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:20791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.042 [2024-07-14 09:43:48.354974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.042 [2024-07-14 09:43:48.366926] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.042 [2024-07-14 09:43:48.366958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:18787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.042 [2024-07-14 09:43:48.366977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.042 [2024-07-14 09:43:48.382789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.042 [2024-07-14 09:43:48.382825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:1752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.042 [2024-07-14 09:43:48.382852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.042 [2024-07-14 09:43:48.395269] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.042 [2024-07-14 09:43:48.395304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:9628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.042 [2024-07-14 09:43:48.395323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.042 [2024-07-14 09:43:48.410415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.042 [2024-07-14 09:43:48.410450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:4361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.042 [2024-07-14 09:43:48.410469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.042 [2024-07-14 09:43:48.422061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.042 [2024-07-14 09:43:48.422105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:91 nsid:1 lba:17115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.043 [2024-07-14 09:43:48.422123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.043 [2024-07-14 09:43:48.435886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.043 [2024-07-14 09:43:48.435933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:13578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.043 [2024-07-14 09:43:48.435952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.043 [2024-07-14 09:43:48.449272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.043 [2024-07-14 09:43:48.449306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.043 [2024-07-14 09:43:48.449326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.043 [2024-07-14 09:43:48.464007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.043 [2024-07-14 09:43:48.464039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:16435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.043 [2024-07-14 09:43:48.464056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.043 [2024-07-14 09:43:48.476174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.043 [2024-07-14 09:43:48.476222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:3727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.043 [2024-07-14 09:43:48.476243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.043 [2024-07-14 09:43:48.490692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.043 [2024-07-14 09:43:48.490727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:20313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.043 [2024-07-14 09:43:48.490754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.299 [2024-07-14 09:43:48.504812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.299 [2024-07-14 09:43:48.504849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.299 [2024-07-14 09:43:48.504879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.299 [2024-07-14 09:43:48.517885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.299 [2024-07-14 09:43:48.517932] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:8520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.299 [2024-07-14 09:43:48.517950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.299 [2024-07-14 09:43:48.531885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.299 [2024-07-14 09:43:48.531936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:8080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.299 [2024-07-14 09:43:48.531954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.299 [2024-07-14 09:43:48.544285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.299 [2024-07-14 09:43:48.544324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.299 [2024-07-14 09:43:48.544343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.299 [2024-07-14 09:43:48.558897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.299 [2024-07-14 09:43:48.558946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.299 [2024-07-14 09:43:48.558964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.299 [2024-07-14 09:43:48.573518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.299 [2024-07-14 09:43:48.573552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:11788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.299 [2024-07-14 09:43:48.573583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.299 [2024-07-14 09:43:48.584916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.299 [2024-07-14 09:43:48.584947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:4507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.299 [2024-07-14 09:43:48.584964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.299 [2024-07-14 09:43:48.599826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.299 [2024-07-14 09:43:48.599877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:20519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.299 [2024-07-14 09:43:48.599899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.299 [2024-07-14 09:43:48.613745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.299 
[2024-07-14 09:43:48.613779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:12812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.299 [2024-07-14 09:43:48.613806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.299 [2024-07-14 09:43:48.626806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.299 [2024-07-14 09:43:48.626839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.299 [2024-07-14 09:43:48.626859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.299 [2024-07-14 09:43:48.639114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.299 [2024-07-14 09:43:48.639157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:7970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.299 [2024-07-14 09:43:48.639176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.299 [2024-07-14 09:43:48.653405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.299 [2024-07-14 09:43:48.653439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.300 [2024-07-14 09:43:48.653458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.300 [2024-07-14 09:43:48.668118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.300 [2024-07-14 09:43:48.668159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:15653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.300 [2024-07-14 09:43:48.668178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.300 [2024-07-14 09:43:48.680463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.300 [2024-07-14 09:43:48.680497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:11681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.300 [2024-07-14 09:43:48.680516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.300 [2024-07-14 09:43:48.695243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.300 [2024-07-14 09:43:48.695277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:2885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.300 [2024-07-14 09:43:48.695296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.300 [2024-07-14 09:43:48.708544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xcf49d0) 00:34:04.300 [2024-07-14 09:43:48.708579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:25310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.300 [2024-07-14 09:43:48.708599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.300 [2024-07-14 09:43:48.722410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.300 [2024-07-14 09:43:48.722444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.300 [2024-07-14 09:43:48.722463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.300 [2024-07-14 09:43:48.737246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.300 [2024-07-14 09:43:48.737286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:18921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.300 [2024-07-14 09:43:48.737306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.300 [2024-07-14 09:43:48.749472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.300 [2024-07-14 09:43:48.749506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:20431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.300 [2024-07-14 09:43:48.749525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.558 [2024-07-14 09:43:48.764679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.558 [2024-07-14 09:43:48.764713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:20421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.558 [2024-07-14 09:43:48.764733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.558 [2024-07-14 09:43:48.775938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.558 [2024-07-14 09:43:48.775968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.558 [2024-07-14 09:43:48.775988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.558 [2024-07-14 09:43:48.792555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.558 [2024-07-14 09:43:48.792589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:9421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.558 [2024-07-14 09:43:48.792608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.558 [2024-07-14 09:43:48.803941] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.558 [2024-07-14 09:43:48.803969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:5957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.558 [2024-07-14 09:43:48.803987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.558 [2024-07-14 09:43:48.817862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.558 [2024-07-14 09:43:48.817916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.558 [2024-07-14 09:43:48.817936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.558 [2024-07-14 09:43:48.831185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.558 [2024-07-14 09:43:48.831218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.558 [2024-07-14 09:43:48.831237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.558 [2024-07-14 09:43:48.845796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.558 [2024-07-14 09:43:48.845830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:19971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.558 [2024-07-14 09:43:48.845853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.558 [2024-07-14 09:43:48.858952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.558 [2024-07-14 09:43:48.858981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:9939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.558 [2024-07-14 09:43:48.859000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.558 [2024-07-14 09:43:48.872745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.558 [2024-07-14 09:43:48.872778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.558 [2024-07-14 09:43:48.872797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.558 [2024-07-14 09:43:48.886927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.558 [2024-07-14 09:43:48.886973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:14593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.558 [2024-07-14 09:43:48.886991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:34:04.558 [2024-07-14 09:43:48.899849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.558 [2024-07-14 09:43:48.899889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:10347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.558 [2024-07-14 09:43:48.899909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.558 [2024-07-14 09:43:48.912905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.558 [2024-07-14 09:43:48.912934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:2138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.558 [2024-07-14 09:43:48.912952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.558 [2024-07-14 09:43:48.927930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.558 [2024-07-14 09:43:48.927960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:25468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.558 [2024-07-14 09:43:48.927977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.558 [2024-07-14 09:43:48.940041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.558 [2024-07-14 09:43:48.940071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:13954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.558 [2024-07-14 09:43:48.940089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.558 [2024-07-14 09:43:48.954475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.558 [2024-07-14 09:43:48.954507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:17436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.558 [2024-07-14 09:43:48.954527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.558 [2024-07-14 09:43:48.968516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.558 [2024-07-14 09:43:48.968552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.558 [2024-07-14 09:43:48.968580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.558 [2024-07-14 09:43:48.982202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.558 [2024-07-14 09:43:48.982237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:3627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.559 [2024-07-14 09:43:48.982256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.559 [2024-07-14 09:43:48.994997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.559 [2024-07-14 09:43:48.995028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:23350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.559 [2024-07-14 09:43:48.995048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.559 [2024-07-14 09:43:49.010189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.559 [2024-07-14 09:43:49.010220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:8280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.559 [2024-07-14 09:43:49.010237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.817 [2024-07-14 09:43:49.023393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.817 [2024-07-14 09:43:49.023427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:14399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.817 [2024-07-14 09:43:49.023446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.817 [2024-07-14 09:43:49.036535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.817 [2024-07-14 09:43:49.036570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:5017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.817 [2024-07-14 09:43:49.036589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.817 [2024-07-14 09:43:49.050666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.817 [2024-07-14 09:43:49.050701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:15346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.817 [2024-07-14 09:43:49.050720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.817 [2024-07-14 09:43:49.063827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.817 [2024-07-14 09:43:49.063861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:10643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.817 [2024-07-14 09:43:49.063889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.817 [2024-07-14 09:43:49.077539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.817 [2024-07-14 09:43:49.077573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:24524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.818 [2024-07-14 09:43:49.077593] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.818 [2024-07-14 09:43:49.091547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.818 [2024-07-14 09:43:49.091587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:4625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.818 [2024-07-14 09:43:49.091607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.818 [2024-07-14 09:43:49.103827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.818 [2024-07-14 09:43:49.103861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:14144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.818 [2024-07-14 09:43:49.103907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.818 [2024-07-14 09:43:49.118283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.818 [2024-07-14 09:43:49.118318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.818 [2024-07-14 09:43:49.118337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.818 [2024-07-14 09:43:49.132411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.818 [2024-07-14 09:43:49.132445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:14256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.818 [2024-07-14 09:43:49.132464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.818 [2024-07-14 09:43:49.144465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.818 [2024-07-14 09:43:49.144500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:14864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.818 [2024-07-14 09:43:49.144519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.818 [2024-07-14 09:43:49.159287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.818 [2024-07-14 09:43:49.159321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.818 [2024-07-14 09:43:49.159340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.818 [2024-07-14 09:43:49.172425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.818 [2024-07-14 09:43:49.172459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:5826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.818 [2024-07-14 09:43:49.172478] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.818 [2024-07-14 09:43:49.185428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.818 [2024-07-14 09:43:49.185461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.818 [2024-07-14 09:43:49.185480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.818 [2024-07-14 09:43:49.201025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.818 [2024-07-14 09:43:49.201055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:13375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.818 [2024-07-14 09:43:49.201088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.818 [2024-07-14 09:43:49.214331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.818 [2024-07-14 09:43:49.214366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:17436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.818 [2024-07-14 09:43:49.214385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.818 [2024-07-14 09:43:49.227211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.818 [2024-07-14 09:43:49.227246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.818 [2024-07-14 09:43:49.227267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.818 [2024-07-14 09:43:49.241382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.818 [2024-07-14 09:43:49.241417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:4713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.818 [2024-07-14 09:43:49.241437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.818 [2024-07-14 09:43:49.254970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.818 [2024-07-14 09:43:49.255001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:21372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.818 [2024-07-14 09:43:49.255018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.818 [2024-07-14 09:43:49.268548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:04.818 [2024-07-14 09:43:49.268583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:04.818 [2024-07-14 09:43:49.268603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.076 [2024-07-14 09:43:49.282278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.076 [2024-07-14 09:43:49.282313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.076 [2024-07-14 09:43:49.282333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.076 [2024-07-14 09:43:49.295640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.076 [2024-07-14 09:43:49.295675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.076 [2024-07-14 09:43:49.295694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.076 [2024-07-14 09:43:49.309514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.076 [2024-07-14 09:43:49.309549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:13008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.076 [2024-07-14 09:43:49.309568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.076 [2024-07-14 09:43:49.324181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.076 [2024-07-14 09:43:49.324230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:5060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.076 [2024-07-14 09:43:49.324256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.076 [2024-07-14 09:43:49.337364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.076 [2024-07-14 09:43:49.337398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:3471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.076 [2024-07-14 09:43:49.337418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.076 [2024-07-14 09:43:49.350802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.076 [2024-07-14 09:43:49.350835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:10023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.076 [2024-07-14 09:43:49.350855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.076 [2024-07-14 09:43:49.363932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.076 [2024-07-14 09:43:49.363962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:9082 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.076 [2024-07-14 09:43:49.363980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.076 [2024-07-14 09:43:49.377052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.076 [2024-07-14 09:43:49.377084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:4327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.076 [2024-07-14 09:43:49.377102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.076 [2024-07-14 09:43:49.391535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.076 [2024-07-14 09:43:49.391571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:7356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.077 [2024-07-14 09:43:49.391590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.077 [2024-07-14 09:43:49.406115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.077 [2024-07-14 09:43:49.406161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:4643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.077 [2024-07-14 09:43:49.406182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.077 [2024-07-14 09:43:49.419782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.077 [2024-07-14 09:43:49.419816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.077 [2024-07-14 09:43:49.419836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.077 [2024-07-14 09:43:49.432031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.077 [2024-07-14 09:43:49.432077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.077 [2024-07-14 09:43:49.432095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.077 [2024-07-14 09:43:49.446619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.077 [2024-07-14 09:43:49.446653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:10575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.077 [2024-07-14 09:43:49.446672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.077 [2024-07-14 09:43:49.459291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.077 [2024-07-14 09:43:49.459325] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.077 [2024-07-14 09:43:49.459344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.077 [2024-07-14 09:43:49.472617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.077 [2024-07-14 09:43:49.472652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:3237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.077 [2024-07-14 09:43:49.472671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.077 [2024-07-14 09:43:49.486805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.077 [2024-07-14 09:43:49.486839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:9485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.077 [2024-07-14 09:43:49.486858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.077 [2024-07-14 09:43:49.501652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.077 [2024-07-14 09:43:49.501686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.077 [2024-07-14 09:43:49.501705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.077 [2024-07-14 09:43:49.513988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.077 [2024-07-14 09:43:49.514019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:5400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.077 [2024-07-14 09:43:49.514037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.334 [2024-07-14 09:43:49.529592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.334 [2024-07-14 09:43:49.529627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:22671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.334 [2024-07-14 09:43:49.529647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.334 [2024-07-14 09:43:49.541672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.334 [2024-07-14 09:43:49.541706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.334 [2024-07-14 09:43:49.541725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.334 [2024-07-14 09:43:49.556518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.334 [2024-07-14 09:43:49.556552] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.334 [2024-07-14 09:43:49.556578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.334 [2024-07-14 09:43:49.570854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.334 [2024-07-14 09:43:49.570895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:7326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.334 [2024-07-14 09:43:49.570929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.334 [2024-07-14 09:43:49.584847] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.334 [2024-07-14 09:43:49.584891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:6299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.334 [2024-07-14 09:43:49.584927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.334 [2024-07-14 09:43:49.597182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.334 [2024-07-14 09:43:49.597231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:3860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.334 [2024-07-14 09:43:49.597253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.334 [2024-07-14 09:43:49.612467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.334 [2024-07-14 09:43:49.612504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:2280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.334 [2024-07-14 09:43:49.612523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.334 [2024-07-14 09:43:49.625414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.334 [2024-07-14 09:43:49.625461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:6918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.334 [2024-07-14 09:43:49.625478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.334 [2024-07-14 09:43:49.637593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.334 [2024-07-14 09:43:49.637625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:7924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.334 [2024-07-14 09:43:49.637643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.334 [2024-07-14 09:43:49.651648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 
00:34:05.334 [2024-07-14 09:43:49.651680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:16487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.334 [2024-07-14 09:43:49.651698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.334 [2024-07-14 09:43:49.664507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.334 [2024-07-14 09:43:49.664555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:2171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.334 [2024-07-14 09:43:49.664572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.334 [2024-07-14 09:43:49.675819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.334 [2024-07-14 09:43:49.675883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:16848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.334 [2024-07-14 09:43:49.675919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.334 [2024-07-14 09:43:49.689531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.334 [2024-07-14 09:43:49.689563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.334 [2024-07-14 09:43:49.689581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.334 [2024-07-14 09:43:49.701317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.334 [2024-07-14 09:43:49.701349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:23420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.334 [2024-07-14 09:43:49.701366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.334 [2024-07-14 09:43:49.715050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.334 [2024-07-14 09:43:49.715081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:20246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.334 [2024-07-14 09:43:49.715097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.334 [2024-07-14 09:43:49.726809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.334 [2024-07-14 09:43:49.726852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.334 [2024-07-14 09:43:49.726877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.334 [2024-07-14 09:43:49.741511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.334 [2024-07-14 09:43:49.741542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.334 [2024-07-14 09:43:49.741559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.334 [2024-07-14 09:43:49.755014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.334 [2024-07-14 09:43:49.755045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:3868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.334 [2024-07-14 09:43:49.755077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.334 [2024-07-14 09:43:49.767506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.334 [2024-07-14 09:43:49.767539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:13982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.334 [2024-07-14 09:43:49.767558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.334 [2024-07-14 09:43:49.783649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.334 [2024-07-14 09:43:49.783679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:1527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.334 [2024-07-14 09:43:49.783696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.592 [2024-07-14 09:43:49.796440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.592 [2024-07-14 09:43:49.796474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.592 [2024-07-14 09:43:49.796493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.592 [2024-07-14 09:43:49.809732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.592 [2024-07-14 09:43:49.809766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.592 [2024-07-14 09:43:49.809785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.592 [2024-07-14 09:43:49.824158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.592 [2024-07-14 09:43:49.824206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.592 [2024-07-14 09:43:49.824225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.592 [2024-07-14 09:43:49.836857] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.592 [2024-07-14 09:43:49.836914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:4034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.592 [2024-07-14 09:43:49.836932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.592 [2024-07-14 09:43:49.851370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.592 [2024-07-14 09:43:49.851405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:4437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.592 [2024-07-14 09:43:49.851425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.592 [2024-07-14 09:43:49.864940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.592 [2024-07-14 09:43:49.864971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:23435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.592 [2024-07-14 09:43:49.864988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.592 [2024-07-14 09:43:49.877414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.592 [2024-07-14 09:43:49.877448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:4080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.592 [2024-07-14 09:43:49.877467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.592 [2024-07-14 09:43:49.890861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.592 [2024-07-14 09:43:49.890916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:18125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.592 [2024-07-14 09:43:49.890933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.592 [2024-07-14 09:43:49.905960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.592 [2024-07-14 09:43:49.905991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:25165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.592 [2024-07-14 09:43:49.906013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.592 [2024-07-14 09:43:49.918263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.592 [2024-07-14 09:43:49.918298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:13400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.592 [2024-07-14 09:43:49.918317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:34:05.592 [2024-07-14 09:43:49.934097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.592 [2024-07-14 09:43:49.934125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:10098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.592 [2024-07-14 09:43:49.934155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.592 [2024-07-14 09:43:49.945845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.592 [2024-07-14 09:43:49.945887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:15879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.592 [2024-07-14 09:43:49.945922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.592 [2024-07-14 09:43:49.961752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.592 [2024-07-14 09:43:49.961788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:4745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.592 [2024-07-14 09:43:49.961807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.592 [2024-07-14 09:43:49.974682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.592 [2024-07-14 09:43:49.974716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:5551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.592 [2024-07-14 09:43:49.974736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.592 [2024-07-14 09:43:49.987102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.592 [2024-07-14 09:43:49.987131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:24398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.592 [2024-07-14 09:43:49.987161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.592 [2024-07-14 09:43:50.002370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.592 [2024-07-14 09:43:50.002406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:14608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.592 [2024-07-14 09:43:50.002426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.593 [2024-07-14 09:43:50.016010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.593 [2024-07-14 09:43:50.016075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:3457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.593 [2024-07-14 09:43:50.016094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.593 [2024-07-14 09:43:50.028864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.593 [2024-07-14 09:43:50.028913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:8390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.593 [2024-07-14 09:43:50.028932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.593 [2024-07-14 09:43:50.041996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.593 [2024-07-14 09:43:50.042029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:6999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.593 [2024-07-14 09:43:50.042047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.850 [2024-07-14 09:43:50.055487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.850 [2024-07-14 09:43:50.055524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:10807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.850 [2024-07-14 09:43:50.055543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.850 [2024-07-14 09:43:50.069776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.850 [2024-07-14 09:43:50.069812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:9956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.850 [2024-07-14 09:43:50.069831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.850 [2024-07-14 09:43:50.083880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.850 [2024-07-14 09:43:50.083915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:2180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.850 [2024-07-14 09:43:50.083947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.850 [2024-07-14 09:43:50.097122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.850 [2024-07-14 09:43:50.097153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:11921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.851 [2024-07-14 09:43:50.097170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.851 [2024-07-14 09:43:50.111407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.851 [2024-07-14 09:43:50.111442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:13435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.851 [2024-07-14 09:43:50.111462] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.851 [2024-07-14 09:43:50.125234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.851 [2024-07-14 09:43:50.125269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.851 [2024-07-14 09:43:50.125288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.851 [2024-07-14 09:43:50.138935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.851 [2024-07-14 09:43:50.138965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:11138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.851 [2024-07-14 09:43:50.138983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.851 [2024-07-14 09:43:50.151569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.851 [2024-07-14 09:43:50.151603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:22833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.851 [2024-07-14 09:43:50.151622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.851 [2024-07-14 09:43:50.165310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.851 [2024-07-14 09:43:50.165344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:12684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.851 [2024-07-14 09:43:50.165363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.851 [2024-07-14 09:43:50.178757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.851 [2024-07-14 09:43:50.178791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:22660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.851 [2024-07-14 09:43:50.178810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.851 [2024-07-14 09:43:50.191792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.851 [2024-07-14 09:43:50.191826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:18951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.851 [2024-07-14 09:43:50.191845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.851 [2024-07-14 09:43:50.205766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.851 [2024-07-14 09:43:50.205799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:10904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.851 [2024-07-14 09:43:50.205818] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.851 [2024-07-14 09:43:50.219641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcf49d0) 00:34:05.851 [2024-07-14 09:43:50.219675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:23498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:05.851 [2024-07-14 09:43:50.219695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:05.851 00:34:05.851 Latency(us) 00:34:05.851 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:05.851 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:34:05.851 nvme0n1 : 2.01 18656.29 72.88 0.00 0.00 6851.54 3155.44 18738.44 00:34:05.851 =================================================================================================================== 00:34:05.851 Total : 18656.29 72.88 0.00 0.00 6851.54 3155.44 18738.44 00:34:05.851 0 00:34:05.851 09:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:34:05.851 09:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:34:05.851 09:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:34:05.851 09:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:34:05.851 | .driver_specific 00:34:05.851 | .nvme_error 00:34:05.851 | .status_code 00:34:05.851 | .command_transient_transport_error' 00:34:06.108 09:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 146 > 0 )) 00:34:06.108 09:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 890378 00:34:06.108 09:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 890378 ']' 00:34:06.108 09:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 890378 00:34:06.108 09:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:34:06.108 09:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:06.108 09:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 890378 00:34:06.108 09:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:34:06.108 09:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:34:06.108 09:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 890378' 00:34:06.108 killing process with pid 890378 00:34:06.108 09:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 890378 00:34:06.108 Received shutdown signal, test time was about 2.000000 seconds 00:34:06.108 00:34:06.108 Latency(us) 00:34:06.108 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:06.108 =================================================================================================================== 00:34:06.108 Total : 0.00 0.00 0.00 0.00 0.00 
0.00 0.00 00:34:06.108 09:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 890378 00:34:06.366 09:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:34:06.366 09:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:34:06.366 09:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:34:06.366 09:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:34:06.366 09:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:34:06.366 09:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=890791 00:34:06.366 09:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 890791 /var/tmp/bperf.sock 00:34:06.366 09:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:34:06.366 09:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 890791 ']' 00:34:06.366 09:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:06.366 09:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:06.366 09:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:06.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:06.366 09:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:06.366 09:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:06.366 [2024-07-14 09:43:50.774515] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:34:06.366 [2024-07-14 09:43:50.774597] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid890791 ] 00:34:06.366 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:06.366 Zero copy mechanism will not be used. 
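The traced commands above show how the harness restarts bdevperf for the next error-injection pass: the previous bperf process is killed, run_bperf_err then launches a fresh bdevperf instance in wait-for-RPC mode on its own UNIX-domain socket and blocks until it is listening. A minimal sketch of that launch, using only the flags and helper names visible in the trace (repository paths shortened; waitforlisten is the autotest helper invoked at digest.sh@60):

    # start bdevperf idle (-z) on a private RPC socket: 2-second randread run, 128 KiB I/O, queue depth 16
    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    waitforlisten "$bperfpid" /var/tmp/bperf.sock   # wait until the socket accepts RPCs

The transient-error count checked a few lines above (get_transient_errcount) is read back over the same socket and filtered with jq:

    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'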
00:34:06.366 EAL: No free 2048 kB hugepages reported on node 1 00:34:06.624 [2024-07-14 09:43:50.844018] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:06.624 [2024-07-14 09:43:50.939052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:06.624 09:43:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:06.624 09:43:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:34:06.624 09:43:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:06.624 09:43:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:06.883 09:43:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:34:06.883 09:43:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:06.883 09:43:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:06.883 09:43:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:06.883 09:43:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:06.883 09:43:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:07.449 nvme0n1 00:34:07.449 09:43:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:34:07.449 09:43:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:07.449 09:43:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:07.449 09:43:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:07.449 09:43:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:34:07.449 09:43:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:07.449 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:07.449 Zero copy mechanism will not be used. 00:34:07.449 Running I/O for 2 seconds... 
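Before the 2-second run starts, the trace above arms the digest-error scenario: bdev_nvme_set_options turns on NVMe error statistics and an unlimited bdev retry count, bdev_nvme_attach_controller brings the TCP controller up with data digest enabled (--ddgst), and accel_error_inject_error first clears and then arms crc32c corruption, so the data digests computed for the reads below stop matching and each I/O completes as a COMMAND TRANSIENT TRANSPORT ERROR. A condensed sketch of that sequence, keeping only commands visible in the trace (bperf_rpc is the rpc.py wrapper bound to /var/tmp/bperf.sock defined at digest.sh@18; rpc_cmd is assumed here to address the nvmf target application's default RPC socket):

    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    rpc_cmd accel_error_inject_error -o crc32c -t disable          # clear any leftover injection
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32    # arm crc32c corruption, flags as traced
    bperf_py perform_tests                                         # bdevperf.py -s /var/tmp/bperf.sock perform_tests

Each corrupted digest then surfaces below as a data digest error in nvme_tcp.c followed by a transient transport error completion, which is what get_transient_errcount tallies at the end of the pass.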
00:34:07.449 [2024-07-14 09:43:51.869699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:07.449 [2024-07-14 09:43:51.869747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.449 [2024-07-14 09:43:51.869766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:07.449 [2024-07-14 09:43:51.882762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:07.449 [2024-07-14 09:43:51.882808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.449 [2024-07-14 09:43:51.882825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:07.449 [2024-07-14 09:43:51.897500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:07.449 [2024-07-14 09:43:51.897548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.449 [2024-07-14 09:43:51.897566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:07.708 [2024-07-14 09:43:51.911564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:07.708 [2024-07-14 09:43:51.911617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.709 [2024-07-14 09:43:51.911634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:07.709 [2024-07-14 09:43:51.925952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:07.709 [2024-07-14 09:43:51.925986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.709 [2024-07-14 09:43:51.926003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:07.709 [2024-07-14 09:43:51.940350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:07.709 [2024-07-14 09:43:51.940381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.709 [2024-07-14 09:43:51.940397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:07.709 [2024-07-14 09:43:51.955150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:07.709 [2024-07-14 09:43:51.955196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.709 [2024-07-14 09:43:51.955212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:07.709 [2024-07-14 09:43:51.969909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:07.709 [2024-07-14 09:43:51.969939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.709 [2024-07-14 09:43:51.969956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:07.709 [2024-07-14 09:43:51.983008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:07.709 [2024-07-14 09:43:51.983039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.709 [2024-07-14 09:43:51.983056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:07.709 [2024-07-14 09:43:51.996098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:07.709 [2024-07-14 09:43:51.996129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.709 [2024-07-14 09:43:51.996147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:07.709 [2024-07-14 09:43:52.009421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:07.709 [2024-07-14 09:43:52.009450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.709 [2024-07-14 09:43:52.009467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:07.709 [2024-07-14 09:43:52.022506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:07.709 [2024-07-14 09:43:52.022534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.709 [2024-07-14 09:43:52.022550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:07.709 [2024-07-14 09:43:52.035387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:07.709 [2024-07-14 09:43:52.035417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.709 [2024-07-14 09:43:52.035432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:07.709 [2024-07-14 09:43:52.048498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:07.709 [2024-07-14 09:43:52.048541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.709 [2024-07-14 09:43:52.048557] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:07.709 [2024-07-14 09:43:52.061396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:07.709 [2024-07-14 09:43:52.061424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.709 [2024-07-14 09:43:52.061439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:07.709 [2024-07-14 09:43:52.074384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:07.709 [2024-07-14 09:43:52.074412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.709 [2024-07-14 09:43:52.074427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:07.709 [2024-07-14 09:43:52.087293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:07.709 [2024-07-14 09:43:52.087321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.709 [2024-07-14 09:43:52.087337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:07.709 [2024-07-14 09:43:52.100366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:07.709 [2024-07-14 09:43:52.100393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.709 [2024-07-14 09:43:52.100409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:07.709 [2024-07-14 09:43:52.113429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:07.709 [2024-07-14 09:43:52.113457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.709 [2024-07-14 09:43:52.113472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:07.709 [2024-07-14 09:43:52.126550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:07.709 [2024-07-14 09:43:52.126593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.709 [2024-07-14 09:43:52.126610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:07.709 [2024-07-14 09:43:52.139602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:07.709 [2024-07-14 09:43:52.139630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:07.709 [2024-07-14 09:43:52.139655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:07.709 [2024-07-14 09:43:52.153186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:07.709 [2024-07-14 09:43:52.153229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.709 [2024-07-14 09:43:52.153248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:07.968 [2024-07-14 09:43:52.167245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:07.968 [2024-07-14 09:43:52.167276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.968 [2024-07-14 09:43:52.167292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:07.968 [2024-07-14 09:43:52.180410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:07.968 [2024-07-14 09:43:52.180456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.968 [2024-07-14 09:43:52.180473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:07.968 [2024-07-14 09:43:52.193512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:07.968 [2024-07-14 09:43:52.193542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.968 [2024-07-14 09:43:52.193559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:07.968 [2024-07-14 09:43:52.207005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:07.968 [2024-07-14 09:43:52.207034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.968 [2024-07-14 09:43:52.207050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:07.968 [2024-07-14 09:43:52.220156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:07.968 [2024-07-14 09:43:52.220185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.968 [2024-07-14 09:43:52.220202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:07.968 [2024-07-14 09:43:52.233228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:07.968 [2024-07-14 09:43:52.233256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.968 [2024-07-14 09:43:52.233273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:07.968 [2024-07-14 09:43:52.246584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:07.968 [2024-07-14 09:43:52.246627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.968 [2024-07-14 09:43:52.246644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:07.968 [2024-07-14 09:43:52.259792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:07.968 [2024-07-14 09:43:52.259830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.968 [2024-07-14 09:43:52.259847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:07.968 [2024-07-14 09:43:52.272881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:07.968 [2024-07-14 09:43:52.272910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.968 [2024-07-14 09:43:52.272927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:07.968 [2024-07-14 09:43:52.286139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:07.968 [2024-07-14 09:43:52.286184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.968 [2024-07-14 09:43:52.286201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:07.968 [2024-07-14 09:43:52.299371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:07.968 [2024-07-14 09:43:52.299401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.968 [2024-07-14 09:43:52.299418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:07.968 [2024-07-14 09:43:52.312481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:07.968 [2024-07-14 09:43:52.312511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.968 [2024-07-14 09:43:52.312527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:07.968 [2024-07-14 09:43:52.325716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:07.968 [2024-07-14 09:43:52.325746] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.968 [2024-07-14 09:43:52.325762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:07.968 [2024-07-14 09:43:52.338988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:07.968 [2024-07-14 09:43:52.339018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.968 [2024-07-14 09:43:52.339035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:07.968 [2024-07-14 09:43:52.352128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:07.968 [2024-07-14 09:43:52.352158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.968 [2024-07-14 09:43:52.352191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:07.968 [2024-07-14 09:43:52.365738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:07.968 [2024-07-14 09:43:52.365768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.968 [2024-07-14 09:43:52.365785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:07.968 [2024-07-14 09:43:52.379103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:07.968 [2024-07-14 09:43:52.379133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.968 [2024-07-14 09:43:52.379150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:07.968 [2024-07-14 09:43:52.392522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:07.968 [2024-07-14 09:43:52.392552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.968 [2024-07-14 09:43:52.392584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:07.968 [2024-07-14 09:43:52.405678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:07.968 [2024-07-14 09:43:52.405709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:07.968 [2024-07-14 09:43:52.405725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.227 [2024-07-14 09:43:52.421587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 
00:34:08.227 [2024-07-14 09:43:52.421621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.227 [2024-07-14 09:43:52.421638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.227 [2024-07-14 09:43:52.435137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:08.227 [2024-07-14 09:43:52.435184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.227 [2024-07-14 09:43:52.435201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.227 [2024-07-14 09:43:52.448233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:08.227 [2024-07-14 09:43:52.448277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.227 [2024-07-14 09:43:52.448294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.227 [2024-07-14 09:43:52.461406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:08.227 [2024-07-14 09:43:52.461436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.227 [2024-07-14 09:43:52.461453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.227 [2024-07-14 09:43:52.474500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:08.227 [2024-07-14 09:43:52.474545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.227 [2024-07-14 09:43:52.474562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.227 [2024-07-14 09:43:52.487408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:08.227 [2024-07-14 09:43:52.487461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.227 [2024-07-14 09:43:52.487479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.227 [2024-07-14 09:43:52.500718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:08.227 [2024-07-14 09:43:52.500751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.227 [2024-07-14 09:43:52.500771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.227 [2024-07-14 09:43:52.514118] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:08.227 [2024-07-14 09:43:52.514149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.227 [2024-07-14 09:43:52.514166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.227 [2024-07-14 09:43:52.527185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:08.227 [2024-07-14 09:43:52.527229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.227 [2024-07-14 09:43:52.527246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.227 [2024-07-14 09:43:52.540516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:08.227 [2024-07-14 09:43:52.540562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.227 [2024-07-14 09:43:52.540579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.227 [2024-07-14 09:43:52.554755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:08.227 [2024-07-14 09:43:52.554802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.227 [2024-07-14 09:43:52.554819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.227 [2024-07-14 09:43:52.569719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:08.227 [2024-07-14 09:43:52.569755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.227 [2024-07-14 09:43:52.569775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.227 [2024-07-14 09:43:52.583596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:08.227 [2024-07-14 09:43:52.583641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.227 [2024-07-14 09:43:52.583658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.227 [2024-07-14 09:43:52.597546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:08.227 [2024-07-14 09:43:52.597579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.227 [2024-07-14 09:43:52.597596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:34:08.227 [2024-07-14 09:43:52.610500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:08.227 [2024-07-14 09:43:52.610530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.227 [2024-07-14 09:43:52.610547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.227 [2024-07-14 09:43:52.624731] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:08.227 [2024-07-14 09:43:52.624767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.227 [2024-07-14 09:43:52.624786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.227 [2024-07-14 09:43:52.640122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:08.227 [2024-07-14 09:43:52.640153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.227 [2024-07-14 09:43:52.640170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.227 [2024-07-14 09:43:52.654735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:08.227 [2024-07-14 09:43:52.654767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.227 [2024-07-14 09:43:52.654784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.227 [2024-07-14 09:43:52.669762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:08.227 [2024-07-14 09:43:52.669795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.227 [2024-07-14 09:43:52.669813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.487 [2024-07-14 09:43:52.683150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:08.487 [2024-07-14 09:43:52.683197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.487 [2024-07-14 09:43:52.683214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.487 [2024-07-14 09:43:52.697959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:08.487 [2024-07-14 09:43:52.697992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.487 [2024-07-14 09:43:52.698010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.487 [2024-07-14 09:43:52.712743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:08.487 [2024-07-14 09:43:52.712774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.487 [2024-07-14 09:43:52.712791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.487 [2024-07-14 09:43:52.726768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:08.487 [2024-07-14 09:43:52.726814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.487 [2024-07-14 09:43:52.726840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.487 [2024-07-14 09:43:52.740669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:08.487 [2024-07-14 09:43:52.740700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.487 [2024-07-14 09:43:52.740717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.487 [2024-07-14 09:43:52.754818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:08.487 [2024-07-14 09:43:52.754877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.487 [2024-07-14 09:43:52.754901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.487 [2024-07-14 09:43:52.768949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:08.487 [2024-07-14 09:43:52.768980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.487 [2024-07-14 09:43:52.768997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.487 [2024-07-14 09:43:52.782976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:08.487 [2024-07-14 09:43:52.783008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.487 [2024-07-14 09:43:52.783025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.487 [2024-07-14 09:43:52.796984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:08.487 [2024-07-14 09:43:52.797017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.487 [2024-07-14 09:43:52.797035] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.487 [2024-07-14 09:43:52.812016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:08.487 [2024-07-14 09:43:52.812049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.487 [2024-07-14 09:43:52.812066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.487 [2024-07-14 09:43:52.826270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:08.487 [2024-07-14 09:43:52.826318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.487 [2024-07-14 09:43:52.826335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.487 [2024-07-14 09:43:52.840359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:08.487 [2024-07-14 09:43:52.840390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.487 [2024-07-14 09:43:52.840407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.487 [2024-07-14 09:43:52.854226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:08.487 [2024-07-14 09:43:52.854265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.487 [2024-07-14 09:43:52.854299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.487 [2024-07-14 09:43:52.868446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:08.487 [2024-07-14 09:43:52.868478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.487 [2024-07-14 09:43:52.868495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.487 [2024-07-14 09:43:52.882233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:08.487 [2024-07-14 09:43:52.882263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.487 [2024-07-14 09:43:52.882279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.487 [2024-07-14 09:43:52.896391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:08.487 [2024-07-14 09:43:52.896423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:08.487 [2024-07-14 09:43:52.896441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.487 [2024-07-14 09:43:52.911025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:08.487 [2024-07-14 09:43:52.911056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.487 [2024-07-14 09:43:52.911073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.487 [2024-07-14 09:43:52.925418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:08.487 [2024-07-14 09:43:52.925468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.487 [2024-07-14 09:43:52.925486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.747 [2024-07-14 09:43:52.939674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:08.747 [2024-07-14 09:43:52.939710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.747 [2024-07-14 09:43:52.939730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.747 [2024-07-14 09:43:52.954148] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:08.747 [2024-07-14 09:43:52.954194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.747 [2024-07-14 09:43:52.954211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.747 [2024-07-14 09:43:52.967762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:08.747 [2024-07-14 09:43:52.967791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.747 [2024-07-14 09:43:52.967808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.747 [2024-07-14 09:43:52.982059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:08.747 [2024-07-14 09:43:52.982105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.747 [2024-07-14 09:43:52.982122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.747 [2024-07-14 09:43:52.995523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:08.747 [2024-07-14 09:43:52.995573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.747 [2024-07-14 09:43:52.995590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.747 [2024-07-14 09:43:53.010029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:08.747 [2024-07-14 09:43:53.010076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.747 [2024-07-14 09:43:53.010094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.747 [2024-07-14 09:43:53.023630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:08.747 [2024-07-14 09:43:53.023676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.747 [2024-07-14 09:43:53.023693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.747 [2024-07-14 09:43:53.038212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:08.747 [2024-07-14 09:43:53.038244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.747 [2024-07-14 09:43:53.038261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.747 [2024-07-14 09:43:53.052353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:08.747 [2024-07-14 09:43:53.052385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.747 [2024-07-14 09:43:53.052424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.747 [2024-07-14 09:43:53.066816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:08.747 [2024-07-14 09:43:53.066849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.747 [2024-07-14 09:43:53.066892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.747 [2024-07-14 09:43:53.081003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:08.747 [2024-07-14 09:43:53.081035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.747 [2024-07-14 09:43:53.081052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.747 [2024-07-14 09:43:53.094432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:08.747 [2024-07-14 09:43:53.094480] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.747 [2024-07-14 09:43:53.094504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.747 [2024-07-14 09:43:53.108115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:08.747 [2024-07-14 09:43:53.108146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.747 [2024-07-14 09:43:53.108163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.747 [2024-07-14 09:43:53.123001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:08.747 [2024-07-14 09:43:53.123034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.747 [2024-07-14 09:43:53.123051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.747 [2024-07-14 09:43:53.137015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:08.747 [2024-07-14 09:43:53.137047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.747 [2024-07-14 09:43:53.137064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.747 [2024-07-14 09:43:53.150801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:08.747 [2024-07-14 09:43:53.150831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.747 [2024-07-14 09:43:53.150863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:08.747 [2024-07-14 09:43:53.164471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:08.747 [2024-07-14 09:43:53.164505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.747 [2024-07-14 09:43:53.164522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:08.747 [2024-07-14 09:43:53.179060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:08.747 [2024-07-14 09:43:53.179091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.747 [2024-07-14 09:43:53.179108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:08.747 [2024-07-14 09:43:53.193160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 
00:34:08.747 [2024-07-14 09:43:53.193208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.747 [2024-07-14 09:43:53.193224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.004 [2024-07-14 09:43:53.206672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:09.004 [2024-07-14 09:43:53.206718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.004 [2024-07-14 09:43:53.206734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.004 [2024-07-14 09:43:53.221390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:09.004 [2024-07-14 09:43:53.221436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.004 [2024-07-14 09:43:53.221452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.004 [2024-07-14 09:43:53.235826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:09.004 [2024-07-14 09:43:53.235857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.004 [2024-07-14 09:43:53.235896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.004 [2024-07-14 09:43:53.249282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:09.004 [2024-07-14 09:43:53.249326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.004 [2024-07-14 09:43:53.249343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.004 [2024-07-14 09:43:53.262645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:09.004 [2024-07-14 09:43:53.262688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.004 [2024-07-14 09:43:53.262705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.004 [2024-07-14 09:43:53.275727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:09.004 [2024-07-14 09:43:53.275770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.004 [2024-07-14 09:43:53.275787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.004 [2024-07-14 09:43:53.289468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:09.004 [2024-07-14 09:43:53.289499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.004 [2024-07-14 09:43:53.289516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.004 [2024-07-14 09:43:53.302653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:09.004 [2024-07-14 09:43:53.302697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.005 [2024-07-14 09:43:53.302714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.005 [2024-07-14 09:43:53.315815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:09.005 [2024-07-14 09:43:53.315858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.005 [2024-07-14 09:43:53.315898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.005 [2024-07-14 09:43:53.329831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:09.005 [2024-07-14 09:43:53.329859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.005 [2024-07-14 09:43:53.329913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.005 [2024-07-14 09:43:53.343130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:09.005 [2024-07-14 09:43:53.343159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.005 [2024-07-14 09:43:53.343176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.005 [2024-07-14 09:43:53.356767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:09.005 [2024-07-14 09:43:53.356795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.005 [2024-07-14 09:43:53.356811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.005 [2024-07-14 09:43:53.370079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:09.005 [2024-07-14 09:43:53.370108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.005 [2024-07-14 09:43:53.370124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.005 [2024-07-14 09:43:53.383198] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:09.005 [2024-07-14 09:43:53.383239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.005 [2024-07-14 09:43:53.383255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.005 [2024-07-14 09:43:53.396587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:09.005 [2024-07-14 09:43:53.396631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.005 [2024-07-14 09:43:53.396647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.005 [2024-07-14 09:43:53.409871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:09.005 [2024-07-14 09:43:53.409901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.005 [2024-07-14 09:43:53.409917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.005 [2024-07-14 09:43:53.422890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:09.005 [2024-07-14 09:43:53.422930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.005 [2024-07-14 09:43:53.422946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.005 [2024-07-14 09:43:53.435899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:09.005 [2024-07-14 09:43:53.435953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.005 [2024-07-14 09:43:53.435972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.005 [2024-07-14 09:43:53.449125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:09.005 [2024-07-14 09:43:53.449159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.005 [2024-07-14 09:43:53.449176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.262 [2024-07-14 09:43:53.462244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:09.262 [2024-07-14 09:43:53.462274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.262 [2024-07-14 09:43:53.462290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:34:09.262 [2024-07-14 09:43:53.475381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:09.262 [2024-07-14 09:43:53.475425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.262 [2024-07-14 09:43:53.475441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.262 [2024-07-14 09:43:53.489266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:09.262 [2024-07-14 09:43:53.489295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.262 [2024-07-14 09:43:53.489311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.262 [2024-07-14 09:43:53.502860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:09.262 [2024-07-14 09:43:53.502918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.262 [2024-07-14 09:43:53.502934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.262 [2024-07-14 09:43:53.516525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:09.262 [2024-07-14 09:43:53.516567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.262 [2024-07-14 09:43:53.516583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.262 [2024-07-14 09:43:53.530169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:09.262 [2024-07-14 09:43:53.530196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.262 [2024-07-14 09:43:53.530212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.262 [2024-07-14 09:43:53.544345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:09.262 [2024-07-14 09:43:53.544378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.262 [2024-07-14 09:43:53.544396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.262 [2024-07-14 09:43:53.557621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:09.262 [2024-07-14 09:43:53.557648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.262 [2024-07-14 09:43:53.557663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.262 [2024-07-14 09:43:53.571034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:09.262 [2024-07-14 09:43:53.571061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.262 [2024-07-14 09:43:53.571076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.262 [2024-07-14 09:43:53.584610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:09.262 [2024-07-14 09:43:53.584653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.262 [2024-07-14 09:43:53.584669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.262 [2024-07-14 09:43:53.597720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:09.262 [2024-07-14 09:43:53.597761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.262 [2024-07-14 09:43:53.597777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.262 [2024-07-14 09:43:53.611158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:09.262 [2024-07-14 09:43:53.611184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.262 [2024-07-14 09:43:53.611200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.262 [2024-07-14 09:43:53.625073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:09.262 [2024-07-14 09:43:53.625100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.262 [2024-07-14 09:43:53.625116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.262 [2024-07-14 09:43:53.638527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:09.262 [2024-07-14 09:43:53.638554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.262 [2024-07-14 09:43:53.638570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.262 [2024-07-14 09:43:53.651708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:09.262 [2024-07-14 09:43:53.651751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.262 [2024-07-14 09:43:53.651767] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.262 [2024-07-14 09:43:53.665152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:09.262 [2024-07-14 09:43:53.665179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.262 [2024-07-14 09:43:53.665194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.262 [2024-07-14 09:43:53.678215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:09.262 [2024-07-14 09:43:53.678242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.262 [2024-07-14 09:43:53.678263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.262 [2024-07-14 09:43:53.691735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:09.262 [2024-07-14 09:43:53.691763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.262 [2024-07-14 09:43:53.691778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.262 [2024-07-14 09:43:53.705302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:09.262 [2024-07-14 09:43:53.705330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.262 [2024-07-14 09:43:53.705345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.519 [2024-07-14 09:43:53.718608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:09.519 [2024-07-14 09:43:53.718640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.519 [2024-07-14 09:43:53.718658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.519 [2024-07-14 09:43:53.731929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:09.519 [2024-07-14 09:43:53.731957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.519 [2024-07-14 09:43:53.731972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.519 [2024-07-14 09:43:53.745531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:09.519 [2024-07-14 09:43:53.745575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:09.519 [2024-07-14 09:43:53.745591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.519 [2024-07-14 09:43:53.758928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:09.519 [2024-07-14 09:43:53.758955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.519 [2024-07-14 09:43:53.758970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.519 [2024-07-14 09:43:53.772102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:09.519 [2024-07-14 09:43:53.772129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.519 [2024-07-14 09:43:53.772144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.519 [2024-07-14 09:43:53.785072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:09.519 [2024-07-14 09:43:53.785099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.519 [2024-07-14 09:43:53.785114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.519 [2024-07-14 09:43:53.798440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:09.519 [2024-07-14 09:43:53.798468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.519 [2024-07-14 09:43:53.798484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.519 [2024-07-14 09:43:53.811580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:09.519 [2024-07-14 09:43:53.811608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.519 [2024-07-14 09:43:53.811624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:09.519 [2024-07-14 09:43:53.825265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:09.519 [2024-07-14 09:43:53.825293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.519 [2024-07-14 09:43:53.825309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:09.519 [2024-07-14 09:43:53.838949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:09.519 [2024-07-14 09:43:53.838978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.519 [2024-07-14 09:43:53.838995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:09.519 [2024-07-14 09:43:53.852450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbde3d0) 00:34:09.519 [2024-07-14 09:43:53.852483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.519 [2024-07-14 09:43:53.852502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:09.519 00:34:09.519 Latency(us) 00:34:09.519 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:09.519 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:34:09.519 nvme0n1 : 2.00 2270.57 283.82 0.00 0.00 7042.08 6019.60 15825.73 00:34:09.519 =================================================================================================================== 00:34:09.520 Total : 2270.57 283.82 0.00 0.00 7042.08 6019.60 15825.73 00:34:09.520 0 00:34:09.520 09:43:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:34:09.520 09:43:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:34:09.520 09:43:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:34:09.520 09:43:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:34:09.520 | .driver_specific 00:34:09.520 | .nvme_error 00:34:09.520 | .status_code 00:34:09.520 | .command_transient_transport_error' 00:34:09.777 09:43:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 146 > 0 )) 00:34:09.777 09:43:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 890791 00:34:09.777 09:43:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 890791 ']' 00:34:09.777 09:43:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 890791 00:34:09.777 09:43:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:34:09.777 09:43:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:09.777 09:43:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 890791 00:34:09.777 09:43:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:34:09.777 09:43:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:34:09.777 09:43:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 890791' 00:34:09.777 killing process with pid 890791 00:34:09.777 09:43:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 890791 00:34:09.777 Received shutdown signal, test time was about 2.000000 seconds 00:34:09.777 00:34:09.777 Latency(us) 00:34:09.777 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:09.777 
=================================================================================================================== 00:34:09.777 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:09.777 09:43:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 890791 00:34:10.035 09:43:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:34:10.035 09:43:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:34:10.035 09:43:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:34:10.035 09:43:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:34:10.035 09:43:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:34:10.035 09:43:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=891306 00:34:10.035 09:43:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:34:10.035 09:43:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 891306 /var/tmp/bperf.sock 00:34:10.035 09:43:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 891306 ']' 00:34:10.035 09:43:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:10.035 09:43:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:10.035 09:43:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:10.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:10.035 09:43:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:10.035 09:43:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:10.035 [2024-07-14 09:43:54.417912] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:34:10.035 [2024-07-14 09:43:54.417992] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid891306 ] 00:34:10.035 EAL: No free 2048 kB hugepages reported on node 1 00:34:10.036 [2024-07-14 09:43:54.481197] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:10.294 [2024-07-14 09:43:54.571955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:10.294 09:43:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:10.294 09:43:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:34:10.294 09:43:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:10.294 09:43:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:10.552 09:43:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:34:10.552 09:43:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.552 09:43:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:10.552 09:43:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.552 09:43:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:10.552 09:43:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:11.118 nvme0n1 00:34:11.118 09:43:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:34:11.118 09:43:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.118 09:43:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:11.118 09:43:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.118 09:43:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:34:11.118 09:43:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:11.118 Running I/O for 2 seconds... 
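For readers skimming the xtrace output above, the randwrite digest-error pass boils down to the shell sketch below. It paraphrases commands already shown in this log (same socket, address, subsystem NQN, and bdev names); the split between the bdevperf RPC socket and the default target socket is inferred from the bperf_rpc/rpc_cmd helper names and should be read as an assumption, not as harness documentation.

  # Launch bdevperf with its own RPC socket: 4 KiB random writes, queue depth 128, 2 s run, wait for RPC (-z).
  build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &

  BPERF="scripts/rpc.py -s /var/tmp/bperf.sock"   # RPCs aimed at the bdevperf app
  TARGET="scripts/rpc.py"                         # rpc_cmd destination (default socket; assumed here)

  # Count NVMe error status codes and retry indefinitely, so digest failures surface as
  # transient transport errors in iostat rather than as failed I/O.
  $BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Attach the namespace over TCP with data digest enabled, then arm 256 CRC32C corruptions
  # in the accel layer (injection starts disabled while the controller is attached).
  $TARGET accel_error_inject_error -o crc32c -t disable
  $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  $TARGET accel_error_inject_error -o crc32c -t corrupt -i 256

  # Run I/O for 2 seconds, then read the transient-transport-error counter back.
  # The jq filter is the one-line form of the multi-line filter shown earlier in this log;
  # the test asserts the count is greater than zero (146 in the randread run above).
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  $BPERF bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The "data digest error" / "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" pairs that follow are the expected effect of that injection during the 2-second run.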
00:34:11.118 [2024-07-14 09:43:55.521941] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:11.118 [2024-07-14 09:43:55.522186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.118 [2024-07-14 09:43:55.522224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:11.118 [2024-07-14 09:43:55.536182] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:11.118 [2024-07-14 09:43:55.536481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:10584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.118 [2024-07-14 09:43:55.536510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:11.118 [2024-07-14 09:43:55.550427] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:11.118 [2024-07-14 09:43:55.550676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:17249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.118 [2024-07-14 09:43:55.550706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:11.118 [2024-07-14 09:43:55.564720] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:11.118 [2024-07-14 09:43:55.565068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.118 [2024-07-14 09:43:55.565097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:11.376 [2024-07-14 09:43:55.579591] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:11.376 [2024-07-14 09:43:55.579838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:13848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.376 [2024-07-14 09:43:55.579872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:11.376 [2024-07-14 09:43:55.593658] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:11.377 [2024-07-14 09:43:55.593909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:24631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.377 [2024-07-14 09:43:55.593951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:11.377 [2024-07-14 09:43:55.607738] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:11.377 [2024-07-14 09:43:55.608015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:12219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.377 [2024-07-14 09:43:55.608043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 
sqhd:007a p:0 m:0 dnr:0 00:34:11.377 [2024-07-14 09:43:55.621784] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:11.377 [2024-07-14 09:43:55.622036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.377 [2024-07-14 09:43:55.622065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:11.377 [2024-07-14 09:43:55.635862] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:11.377 [2024-07-14 09:43:55.636111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.377 [2024-07-14 09:43:55.636138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:11.377 [2024-07-14 09:43:55.649878] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:11.377 [2024-07-14 09:43:55.650231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:25573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.377 [2024-07-14 09:43:55.650258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:11.377 [2024-07-14 09:43:55.663791] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:11.377 [2024-07-14 09:43:55.664044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:10342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.377 [2024-07-14 09:43:55.664072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:11.377 [2024-07-14 09:43:55.677883] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:11.377 [2024-07-14 09:43:55.678118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:6408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.377 [2024-07-14 09:43:55.678160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:11.377 [2024-07-14 09:43:55.691988] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:11.377 [2024-07-14 09:43:55.692298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:14426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.377 [2024-07-14 09:43:55.692325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:11.377 [2024-07-14 09:43:55.706045] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:11.377 [2024-07-14 09:43:55.706277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.377 [2024-07-14 09:43:55.706309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:103 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:11.377 [2024-07-14 09:43:55.720287] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:11.377 [2024-07-14 09:43:55.720533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.377 [2024-07-14 09:43:55.720560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:11.377 [2024-07-14 09:43:55.734212] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:11.377 [2024-07-14 09:43:55.734513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.377 [2024-07-14 09:43:55.734540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:11.377 [2024-07-14 09:43:55.748175] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:11.377 [2024-07-14 09:43:55.748551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.377 [2024-07-14 09:43:55.748578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:11.377 [2024-07-14 09:43:55.762246] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:11.377 [2024-07-14 09:43:55.762644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:6971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.377 [2024-07-14 09:43:55.762686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:11.377 [2024-07-14 09:43:55.776253] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:11.377 [2024-07-14 09:43:55.776604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:42 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.377 [2024-07-14 09:43:55.776632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:11.377 [2024-07-14 09:43:55.790233] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:11.377 [2024-07-14 09:43:55.790508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:25258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.377 [2024-07-14 09:43:55.790536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:11.377 [2024-07-14 09:43:55.804055] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:11.377 [2024-07-14 09:43:55.804275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.377 [2024-07-14 09:43:55.804302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:11.377 [2024-07-14 09:43:55.818024] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:11.377 [2024-07-14 09:43:55.818264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.377 [2024-07-14 09:43:55.818291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:11.635 [2024-07-14 09:43:55.832598] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:11.635 [2024-07-14 09:43:55.832936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.635 [2024-07-14 09:43:55.832963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:11.635 [2024-07-14 09:43:55.846614] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:11.635 [2024-07-14 09:43:55.846891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:14736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.635 [2024-07-14 09:43:55.846934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:11.635 [2024-07-14 09:43:55.860675] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:11.635 [2024-07-14 09:43:55.860918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:11830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.635 [2024-07-14 09:43:55.860960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:11.635 [2024-07-14 09:43:55.874607] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:11.635 [2024-07-14 09:43:55.874884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.635 [2024-07-14 09:43:55.874927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:11.635 [2024-07-14 09:43:55.888594] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:11.635 [2024-07-14 09:43:55.888877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:12681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.635 [2024-07-14 09:43:55.888905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:11.636 [2024-07-14 09:43:55.902450] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:11.636 [2024-07-14 09:43:55.902718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:2503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.636 [2024-07-14 09:43:55.902748] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:11.636 [2024-07-14 09:43:55.916404] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:11.636 [2024-07-14 09:43:55.916675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:3331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.636 [2024-07-14 09:43:55.916702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:11.636 [2024-07-14 09:43:55.930331] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:11.636 [2024-07-14 09:43:55.930609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.636 [2024-07-14 09:43:55.930635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:11.636 [2024-07-14 09:43:55.944302] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:11.636 [2024-07-14 09:43:55.944659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:16680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.636 [2024-07-14 09:43:55.944685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:11.636 [2024-07-14 09:43:55.958275] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:11.636 [2024-07-14 09:43:55.958558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:12322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.636 [2024-07-14 09:43:55.958599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:11.636 [2024-07-14 09:43:55.972213] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:11.636 [2024-07-14 09:43:55.972508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:8030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.636 [2024-07-14 09:43:55.972535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:11.636 [2024-07-14 09:43:55.986028] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:11.636 [2024-07-14 09:43:55.986306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:3891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.636 [2024-07-14 09:43:55.986333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:11.636 [2024-07-14 09:43:55.999971] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:11.636 [2024-07-14 09:43:56.000282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:8798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.636 [2024-07-14 
09:43:56.000308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:11.636 [2024-07-14 09:43:56.013908] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:11.636 [2024-07-14 09:43:56.014258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:13854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.636 [2024-07-14 09:43:56.014284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:11.636 [2024-07-14 09:43:56.027980] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:11.636 [2024-07-14 09:43:56.028232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.636 [2024-07-14 09:43:56.028273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:11.636 [2024-07-14 09:43:56.041896] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:11.636 [2024-07-14 09:43:56.042141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:10778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.636 [2024-07-14 09:43:56.042168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:11.636 [2024-07-14 09:43:56.055814] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:11.636 [2024-07-14 09:43:56.056098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.636 [2024-07-14 09:43:56.056125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:11.636 [2024-07-14 09:43:56.069721] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:11.636 [2024-07-14 09:43:56.070002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:2027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.636 [2024-07-14 09:43:56.070033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:11.636 [2024-07-14 09:43:56.083540] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:11.636 [2024-07-14 09:43:56.083838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.636 [2024-07-14 09:43:56.083871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:11.893 [2024-07-14 09:43:56.098134] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:11.893 [2024-07-14 09:43:56.098506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:8397 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:34:11.893 [2024-07-14 09:43:56.098532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:11.893 [2024-07-14 09:43:56.112080] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:11.893 [2024-07-14 09:43:56.112348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:17285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.893 [2024-07-14 09:43:56.112389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:11.893 [2024-07-14 09:43:56.126132] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:11.893 [2024-07-14 09:43:56.126493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.893 [2024-07-14 09:43:56.126520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:11.893 [2024-07-14 09:43:56.140278] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:11.893 [2024-07-14 09:43:56.140531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.893 [2024-07-14 09:43:56.140559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:11.893 [2024-07-14 09:43:56.154218] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:11.893 [2024-07-14 09:43:56.154559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:1473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.893 [2024-07-14 09:43:56.154586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:11.893 [2024-07-14 09:43:56.168242] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:11.893 [2024-07-14 09:43:56.168492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:21615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.893 [2024-07-14 09:43:56.168520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:11.893 [2024-07-14 09:43:56.182233] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:11.893 [2024-07-14 09:43:56.182583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.893 [2024-07-14 09:43:56.182610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:11.893 [2024-07-14 09:43:56.196334] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:11.893 [2024-07-14 09:43:56.196581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:582 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.893 [2024-07-14 09:43:56.196609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:11.893 [2024-07-14 09:43:56.210300] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:11.893 [2024-07-14 09:43:56.210548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:12575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.893 [2024-07-14 09:43:56.210575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:11.893 [2024-07-14 09:43:56.224318] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:11.893 [2024-07-14 09:43:56.224595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.893 [2024-07-14 09:43:56.224636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:11.893 [2024-07-14 09:43:56.238189] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:11.893 [2024-07-14 09:43:56.238546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.893 [2024-07-14 09:43:56.238573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:11.893 [2024-07-14 09:43:56.252116] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:11.893 [2024-07-14 09:43:56.252349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:13697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.893 [2024-07-14 09:43:56.252375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:11.893 [2024-07-14 09:43:56.266042] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:11.893 [2024-07-14 09:43:56.266266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:3775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.893 [2024-07-14 09:43:56.266294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:11.893 [2024-07-14 09:43:56.279959] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:11.893 [2024-07-14 09:43:56.280217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:6207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.893 [2024-07-14 09:43:56.280245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:11.893 [2024-07-14 09:43:56.293884] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:11.893 [2024-07-14 09:43:56.294213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 
lba:20239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.893 [2024-07-14 09:43:56.294240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:11.893 [2024-07-14 09:43:56.307968] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:11.894 [2024-07-14 09:43:56.308255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:2366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.894 [2024-07-14 09:43:56.308301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:11.894 [2024-07-14 09:43:56.321935] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:11.894 [2024-07-14 09:43:56.322255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:8232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.894 [2024-07-14 09:43:56.322282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:11.894 [2024-07-14 09:43:56.335976] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:11.894 [2024-07-14 09:43:56.336245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:14653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:11.894 [2024-07-14 09:43:56.336273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.151 [2024-07-14 09:43:56.350548] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.151 [2024-07-14 09:43:56.350908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:6345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.151 [2024-07-14 09:43:56.350935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.151 [2024-07-14 09:43:56.364494] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.151 [2024-07-14 09:43:56.364789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:21753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.151 [2024-07-14 09:43:56.364815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.151 [2024-07-14 09:43:56.378425] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.151 [2024-07-14 09:43:56.378781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:9872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.151 [2024-07-14 09:43:56.378809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.151 [2024-07-14 09:43:56.392353] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.151 [2024-07-14 09:43:56.392644] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:9128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.151 [2024-07-14 09:43:56.392673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.151 [2024-07-14 09:43:56.406330] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.151 [2024-07-14 09:43:56.406576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.151 [2024-07-14 09:43:56.406603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.151 [2024-07-14 09:43:56.420334] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.151 [2024-07-14 09:43:56.420578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.151 [2024-07-14 09:43:56.420606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.151 [2024-07-14 09:43:56.434299] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.151 [2024-07-14 09:43:56.434645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:17632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.151 [2024-07-14 09:43:56.434678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.151 [2024-07-14 09:43:56.448216] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.151 [2024-07-14 09:43:56.448463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.151 [2024-07-14 09:43:56.448490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.151 [2024-07-14 09:43:56.462155] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.151 [2024-07-14 09:43:56.462378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:16216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.151 [2024-07-14 09:43:56.462405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.151 [2024-07-14 09:43:56.476083] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.151 [2024-07-14 09:43:56.476319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:11591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.151 [2024-07-14 09:43:56.476345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.151 [2024-07-14 09:43:56.490064] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.151 [2024-07-14 09:43:56.490325] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.151 [2024-07-14 09:43:56.490352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.151 [2024-07-14 09:43:56.503879] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.151 [2024-07-14 09:43:56.504129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:18950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.151 [2024-07-14 09:43:56.504157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.151 [2024-07-14 09:43:56.517783] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.151 [2024-07-14 09:43:56.518069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:11483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.151 [2024-07-14 09:43:56.518097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.151 [2024-07-14 09:43:56.531813] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.151 [2024-07-14 09:43:56.532088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.151 [2024-07-14 09:43:56.532115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.151 [2024-07-14 09:43:56.545644] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.151 [2024-07-14 09:43:56.545890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.151 [2024-07-14 09:43:56.545928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.151 [2024-07-14 09:43:56.559552] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.151 [2024-07-14 09:43:56.559794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:23951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.151 [2024-07-14 09:43:56.559829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.151 [2024-07-14 09:43:56.573387] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.151 [2024-07-14 09:43:56.573695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:16076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.151 [2024-07-14 09:43:56.573726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.151 [2024-07-14 09:43:56.587247] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.151 [2024-07-14 
09:43:56.587517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.151 [2024-07-14 09:43:56.587544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.151 [2024-07-14 09:43:56.601251] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.151 [2024-07-14 09:43:56.601575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:11164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.151 [2024-07-14 09:43:56.601602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.409 [2024-07-14 09:43:56.615588] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.409 [2024-07-14 09:43:56.615901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:14772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.409 [2024-07-14 09:43:56.615929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.409 [2024-07-14 09:43:56.629451] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.409 [2024-07-14 09:43:56.629720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:25143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.409 [2024-07-14 09:43:56.629763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.409 [2024-07-14 09:43:56.643347] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.409 [2024-07-14 09:43:56.643635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.409 [2024-07-14 09:43:56.643661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.409 [2024-07-14 09:43:56.657110] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.409 [2024-07-14 09:43:56.657369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:3670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.409 [2024-07-14 09:43:56.657396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.409 [2024-07-14 09:43:56.671026] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.409 [2024-07-14 09:43:56.671280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.409 [2024-07-14 09:43:56.671322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.409 [2024-07-14 09:43:56.685019] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 
00:34:12.409 [2024-07-14 09:43:56.685313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:24799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.409 [2024-07-14 09:43:56.685340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.409 [2024-07-14 09:43:56.699045] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.409 [2024-07-14 09:43:56.699267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:25044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.409 [2024-07-14 09:43:56.699295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.409 [2024-07-14 09:43:56.713126] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.409 [2024-07-14 09:43:56.713394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.409 [2024-07-14 09:43:56.713423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.409 [2024-07-14 09:43:56.727360] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.410 [2024-07-14 09:43:56.727700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.410 [2024-07-14 09:43:56.727728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.410 [2024-07-14 09:43:56.741411] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.410 [2024-07-14 09:43:56.741656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.410 [2024-07-14 09:43:56.741683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.410 [2024-07-14 09:43:56.755315] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.410 [2024-07-14 09:43:56.755564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.410 [2024-07-14 09:43:56.755591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.410 [2024-07-14 09:43:56.769376] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.410 [2024-07-14 09:43:56.769622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:18443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.410 [2024-07-14 09:43:56.769649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.410 [2024-07-14 09:43:56.783342] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.410 [2024-07-14 09:43:56.783615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:3932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.410 [2024-07-14 09:43:56.783642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.410 [2024-07-14 09:43:56.797211] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.410 [2024-07-14 09:43:56.797456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.410 [2024-07-14 09:43:56.797488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.410 [2024-07-14 09:43:56.811017] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.410 [2024-07-14 09:43:56.811234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:19019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.410 [2024-07-14 09:43:56.811261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.410 [2024-07-14 09:43:56.824877] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.410 [2024-07-14 09:43:56.825234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:1220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.410 [2024-07-14 09:43:56.825260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.410 [2024-07-14 09:43:56.838775] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.410 [2024-07-14 09:43:56.839060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.410 [2024-07-14 09:43:56.839087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.410 [2024-07-14 09:43:56.852736] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.410 [2024-07-14 09:43:56.853026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.410 [2024-07-14 09:43:56.853053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.669 [2024-07-14 09:43:56.867229] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.669 [2024-07-14 09:43:56.867581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:16793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.669 [2024-07-14 09:43:56.867607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.669 [2024-07-14 09:43:56.881154] tcp.c:2067:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.669 [2024-07-14 09:43:56.881439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:24930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.669 [2024-07-14 09:43:56.881481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.669 [2024-07-14 09:43:56.895096] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.669 [2024-07-14 09:43:56.895319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:6870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.669 [2024-07-14 09:43:56.895346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.669 [2024-07-14 09:43:56.909067] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.669 [2024-07-14 09:43:56.909333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:23254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.669 [2024-07-14 09:43:56.909375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.669 [2024-07-14 09:43:56.923002] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.669 [2024-07-14 09:43:56.923329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:17416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.669 [2024-07-14 09:43:56.923359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.669 [2024-07-14 09:43:56.937004] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.669 [2024-07-14 09:43:56.937274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.669 [2024-07-14 09:43:56.937300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.669 [2024-07-14 09:43:56.950943] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.669 [2024-07-14 09:43:56.951259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.669 [2024-07-14 09:43:56.951285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.669 [2024-07-14 09:43:56.964999] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.669 [2024-07-14 09:43:56.965266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:9199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.669 [2024-07-14 09:43:56.965291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.669 [2024-07-14 09:43:56.978955] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.669 [2024-07-14 09:43:56.979261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:4387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.670 [2024-07-14 09:43:56.979287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.670 [2024-07-14 09:43:56.992824] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.670 [2024-07-14 09:43:56.993107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:4180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.670 [2024-07-14 09:43:56.993134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.670 [2024-07-14 09:43:57.006827] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.670 [2024-07-14 09:43:57.007130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:8580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.670 [2024-07-14 09:43:57.007156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.670 [2024-07-14 09:43:57.020791] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.670 [2024-07-14 09:43:57.021073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.670 [2024-07-14 09:43:57.021100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.670 [2024-07-14 09:43:57.034693] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.670 [2024-07-14 09:43:57.034985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.670 [2024-07-14 09:43:57.035013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.670 [2024-07-14 09:43:57.048395] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.670 [2024-07-14 09:43:57.048679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.670 [2024-07-14 09:43:57.048706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.670 [2024-07-14 09:43:57.062254] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.670 [2024-07-14 09:43:57.062531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.670 [2024-07-14 09:43:57.062558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.670 
[2024-07-14 09:43:57.076140] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.670 [2024-07-14 09:43:57.076362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.670 [2024-07-14 09:43:57.076389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.670 [2024-07-14 09:43:57.090081] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.670 [2024-07-14 09:43:57.090358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:19068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.670 [2024-07-14 09:43:57.090386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.670 [2024-07-14 09:43:57.104029] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.670 [2024-07-14 09:43:57.104251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.670 [2024-07-14 09:43:57.104278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.670 [2024-07-14 09:43:57.118153] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.670 [2024-07-14 09:43:57.118474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.670 [2024-07-14 09:43:57.118501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.961 [2024-07-14 09:43:57.135152] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.961 [2024-07-14 09:43:57.135434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:2845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.961 [2024-07-14 09:43:57.135462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.961 [2024-07-14 09:43:57.149066] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.961 [2024-07-14 09:43:57.149290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:12482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.961 [2024-07-14 09:43:57.149317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.961 [2024-07-14 09:43:57.162924] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.962 [2024-07-14 09:43:57.163148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.962 [2024-07-14 09:43:57.163176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007a p:0 m:0 
dnr:0 00:34:12.962 [2024-07-14 09:43:57.176783] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.962 [2024-07-14 09:43:57.177066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:10346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.962 [2024-07-14 09:43:57.177094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.962 [2024-07-14 09:43:57.190773] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.962 [2024-07-14 09:43:57.191056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:2294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.962 [2024-07-14 09:43:57.191084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.962 [2024-07-14 09:43:57.204814] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.962 [2024-07-14 09:43:57.205178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:12738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.962 [2024-07-14 09:43:57.205204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.962 [2024-07-14 09:43:57.218777] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.962 [2024-07-14 09:43:57.219144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:25090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.962 [2024-07-14 09:43:57.219171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.962 [2024-07-14 09:43:57.232814] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.962 [2024-07-14 09:43:57.233183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:6887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.962 [2024-07-14 09:43:57.233209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.962 [2024-07-14 09:43:57.246700] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.962 [2024-07-14 09:43:57.246993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:15702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.962 [2024-07-14 09:43:57.247019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.962 [2024-07-14 09:43:57.260590] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.962 [2024-07-14 09:43:57.260863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:10232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.962 [2024-07-14 09:43:57.260914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 
cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.962 [2024-07-14 09:43:57.274503] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.962 [2024-07-14 09:43:57.274787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.962 [2024-07-14 09:43:57.274813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.962 [2024-07-14 09:43:57.288325] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.962 [2024-07-14 09:43:57.288599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.962 [2024-07-14 09:43:57.288633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.962 [2024-07-14 09:43:57.302049] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.962 [2024-07-14 09:43:57.302270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:1419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.962 [2024-07-14 09:43:57.302296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.962 [2024-07-14 09:43:57.316038] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.962 [2024-07-14 09:43:57.316298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:10767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.962 [2024-07-14 09:43:57.316325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.962 [2024-07-14 09:43:57.329878] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.962 [2024-07-14 09:43:57.330148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:17333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.962 [2024-07-14 09:43:57.330174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.962 [2024-07-14 09:43:57.343825] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.962 [2024-07-14 09:43:57.344109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.962 [2024-07-14 09:43:57.344136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.962 [2024-07-14 09:43:57.357773] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.962 [2024-07-14 09:43:57.358127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.962 [2024-07-14 09:43:57.358154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.962 [2024-07-14 09:43:57.371723] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.962 [2024-07-14 09:43:57.371966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.962 [2024-07-14 09:43:57.371994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:12.962 [2024-07-14 09:43:57.386541] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:12.962 [2024-07-14 09:43:57.386905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:12.962 [2024-07-14 09:43:57.386932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:13.221 [2024-07-14 09:43:57.402111] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:13.221 [2024-07-14 09:43:57.402427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:21627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.221 [2024-07-14 09:43:57.402457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:13.221 [2024-07-14 09:43:57.416266] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:13.221 [2024-07-14 09:43:57.416546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:21542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.221 [2024-07-14 09:43:57.416572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:13.221 [2024-07-14 09:43:57.430424] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:13.221 [2024-07-14 09:43:57.430783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.221 [2024-07-14 09:43:57.430809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:13.221 [2024-07-14 09:43:57.444535] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:13.221 [2024-07-14 09:43:57.444887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:15605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.221 [2024-07-14 09:43:57.444914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:13.221 [2024-07-14 09:43:57.458595] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0 00:34:13.221 [2024-07-14 09:43:57.458843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:8804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.221 [2024-07-14 09:43:57.458878] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:34:13.221 [2024-07-14 09:43:57.472329] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0
00:34:13.221 [2024-07-14 09:43:57.472697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:13.221 [2024-07-14 09:43:57.472727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:34:13.221 [2024-07-14 09:43:57.486315] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0
00:34:13.221 [2024-07-14 09:43:57.486603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:23329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:13.221 [2024-07-14 09:43:57.486628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:34:13.221 [2024-07-14 09:43:57.500205] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0
00:34:13.221 [2024-07-14 09:43:57.500449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:13.221 [2024-07-14 09:43:57.500475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:34:13.221 [2024-07-14 09:43:57.514096] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112a990) with pdu=0x2000190fdeb0
00:34:13.221 [2024-07-14 09:43:57.514382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:18336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:13.221 [2024-07-14 09:43:57.514410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:34:13.221
00:34:13.221 Latency(us)
00:34:13.221 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:13.221 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:13.221 nvme0n1 : 2.01 18189.29 71.05 0.00 0.00 7019.74 3762.25 16408.27
00:34:13.221 ===================================================================================================================
00:34:13.221 Total : 18189.29 71.05 0.00 0.00 7019.74 3762.25 16408.27
00:34:13.221 0
00:34:13.221 09:43:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:34:13.221 09:43:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:34:13.221 09:43:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:34:13.221 09:43:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:34:13.221 | .driver_specific
00:34:13.221 | .nvme_error
00:34:13.221 | .status_code
00:34:13.221 | .command_transient_transport_error'
00:34:13.480 09:43:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 143 > 0 ))
00:34:13.480 09:43:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 891306 00:34:13.480 
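A minimal stand-alone sketch of the error-count check traced just above: the test asks the bdevperf app on /var/tmp/bperf.sock for the bdev's I/O statistics (bdev_get_iostat) and pulls the transient-transport-error counter out with jq, then asserts it is non-zero (143 in this run). The counter exists because bdev_nvme_set_options is called with --nvme-error-stat earlier in the run. Socket path, bdev name and the jq field path are taken from the trace; the variable names below are illustrative only.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # read per-bdev I/O statistics, including the NVMe error counters, from the bdevperf app
  count=$($rpc -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
          | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # the digest-error test passes only if at least one transient transport error was recorded
  (( count > 0 ))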
09:43:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 891306 ']' 00:34:13.480 09:43:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 891306 00:34:13.480 09:43:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:34:13.480 09:43:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:13.480 09:43:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 891306 00:34:13.480 09:43:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:34:13.480 09:43:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:34:13.480 09:43:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 891306' 00:34:13.480 killing process with pid 891306 00:34:13.480 09:43:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 891306 00:34:13.480 Received shutdown signal, test time was about 2.000000 seconds 00:34:13.480 00:34:13.480 Latency(us) 00:34:13.480 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:13.480 =================================================================================================================== 00:34:13.480 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:13.480 09:43:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 891306 00:34:13.738 09:43:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:34:13.738 09:43:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:34:13.738 09:43:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:34:13.738 09:43:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:34:13.738 09:43:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:34:13.738 09:43:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=891718 00:34:13.738 09:43:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:34:13.738 09:43:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 891718 /var/tmp/bperf.sock 00:34:13.738 09:43:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 891718 ']' 00:34:13.738 09:43:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:13.738 09:43:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:13.738 09:43:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:13.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:34:13.738 09:43:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:13.738 09:43:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:13.738 [2024-07-14 09:43:58.097343] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:34:13.738 [2024-07-14 09:43:58.097421] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid891718 ] 00:34:13.738 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:13.738 Zero copy mechanism will not be used. 00:34:13.738 EAL: No free 2048 kB hugepages reported on node 1 00:34:13.738 [2024-07-14 09:43:58.158692] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:13.997 [2024-07-14 09:43:58.246985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:13.997 09:43:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:13.997 09:43:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:34:13.997 09:43:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:13.997 09:43:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:14.255 09:43:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:34:14.255 09:43:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:14.255 09:43:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:14.255 09:43:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:14.255 09:43:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:14.255 09:43:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:14.513 nvme0n1 00:34:14.513 09:43:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:34:14.513 09:43:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:14.513 09:43:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:14.513 09:43:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:14.513 09:43:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:34:14.513 09:43:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:14.771 I/O size of 131072 is greater than zero copy threshold (65536). 
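A condensed sketch of the setup the host/digest.sh trace above performs for this error run (run_bperf_err randwrite 131072 16). Binaries, flags, addresses and the NQN are copied from the trace; the backgrounding and the waitforlisten/pid bookkeeping are simplified away, and the plain rpc.py calls stand in for rpc_cmd, which I assume goes to the target app's default RPC socket rather than to bperf.sock.

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # bdevperf: 131072-byte random writes, queue depth 16, 2 s runtime, RPC socket /var/tmp/bperf.sock
  $spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
  # keep per-command NVMe error statistics and retry bdev I/O indefinitely
  $spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # start with crc32c error injection disabled
  $spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  # attach the NVMe/TCP controller with data digest enabled (--ddgst)
  $spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # then corrupt crc32c results so data-digest verification fails (flags copied verbatim from the trace)
  $spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
  # kick off the workload through bdevperf's RPC helper
  $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests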
00:34:14.771 Zero copy mechanism will not be used. 00:34:14.771 Running I/O for 2 seconds... 00:34:14.771 [2024-07-14 09:43:59.075709] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:14.771 [2024-07-14 09:43:59.076259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:14.771 [2024-07-14 09:43:59.076312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:14.771 [2024-07-14 09:43:59.096429] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:14.771 [2024-07-14 09:43:59.097113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:14.771 [2024-07-14 09:43:59.097170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:14.771 [2024-07-14 09:43:59.118301] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:14.771 [2024-07-14 09:43:59.118790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:14.771 [2024-07-14 09:43:59.118823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:14.771 [2024-07-14 09:43:59.141043] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:14.771 [2024-07-14 09:43:59.141584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:14.772 [2024-07-14 09:43:59.141618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:14.772 [2024-07-14 09:43:59.161368] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:14.772 [2024-07-14 09:43:59.162042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:14.772 [2024-07-14 09:43:59.162070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:14.772 [2024-07-14 09:43:59.183178] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:14.772 [2024-07-14 09:43:59.183738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:14.772 [2024-07-14 09:43:59.183765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:14.772 [2024-07-14 09:43:59.205428] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:14.772 [2024-07-14 09:43:59.205898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:14.772 [2024-07-14 09:43:59.205940] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:15.030 [2024-07-14 09:43:59.227039] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:15.030 [2024-07-14 09:43:59.227479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.030 [2024-07-14 09:43:59.227507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:15.030 [2024-07-14 09:43:59.247537] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:15.030 [2024-07-14 09:43:59.248005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.030 [2024-07-14 09:43:59.248049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:15.030 [2024-07-14 09:43:59.269339] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:15.030 [2024-07-14 09:43:59.269729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.030 [2024-07-14 09:43:59.269755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:15.030 [2024-07-14 09:43:59.289141] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:15.030 [2024-07-14 09:43:59.289771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.030 [2024-07-14 09:43:59.289798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:15.030 [2024-07-14 09:43:59.308226] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:15.030 [2024-07-14 09:43:59.308758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.030 [2024-07-14 09:43:59.308815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:15.030 [2024-07-14 09:43:59.327363] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:15.030 [2024-07-14 09:43:59.327846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.030 [2024-07-14 09:43:59.327900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:15.030 [2024-07-14 09:43:59.349742] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:15.030 [2024-07-14 09:43:59.350276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:15.030 [2024-07-14 09:43:59.350320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:15.030 [2024-07-14 09:43:59.369485] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:15.030 [2024-07-14 09:43:59.369881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.030 [2024-07-14 09:43:59.369924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:15.030 [2024-07-14 09:43:59.387820] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:15.030 [2024-07-14 09:43:59.388260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.030 [2024-07-14 09:43:59.388287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:15.030 [2024-07-14 09:43:59.408394] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:15.030 [2024-07-14 09:43:59.408748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.030 [2024-07-14 09:43:59.408777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:15.030 [2024-07-14 09:43:59.428753] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:15.030 [2024-07-14 09:43:59.429376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.030 [2024-07-14 09:43:59.429421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:15.031 [2024-07-14 09:43:59.446910] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:15.031 [2024-07-14 09:43:59.447368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.031 [2024-07-14 09:43:59.447411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:15.031 [2024-07-14 09:43:59.467532] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:15.031 [2024-07-14 09:43:59.468079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.031 [2024-07-14 09:43:59.468133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:15.289 [2024-07-14 09:43:59.490174] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:15.289 [2024-07-14 09:43:59.490726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.289 [2024-07-14 09:43:59.490772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:15.289 [2024-07-14 09:43:59.512676] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:15.289 [2024-07-14 09:43:59.513211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.289 [2024-07-14 09:43:59.513262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:15.289 [2024-07-14 09:43:59.534674] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:15.289 [2024-07-14 09:43:59.535239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.289 [2024-07-14 09:43:59.535284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:15.289 [2024-07-14 09:43:59.555228] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:15.289 [2024-07-14 09:43:59.555771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.289 [2024-07-14 09:43:59.555812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:15.289 [2024-07-14 09:43:59.575685] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:15.289 [2024-07-14 09:43:59.576270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.289 [2024-07-14 09:43:59.576314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:15.289 [2024-07-14 09:43:59.594741] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:15.289 [2024-07-14 09:43:59.595206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.289 [2024-07-14 09:43:59.595252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:15.289 [2024-07-14 09:43:59.615464] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:15.289 [2024-07-14 09:43:59.616013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.289 [2024-07-14 09:43:59.616042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:15.289 [2024-07-14 09:43:59.639697] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:15.289 [2024-07-14 09:43:59.640289] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.289 [2024-07-14 09:43:59.640339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:15.289 [2024-07-14 09:43:59.661608] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:15.289 [2024-07-14 09:43:59.662084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.289 [2024-07-14 09:43:59.662128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:15.289 [2024-07-14 09:43:59.683622] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:15.289 [2024-07-14 09:43:59.684068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.289 [2024-07-14 09:43:59.684110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:15.289 [2024-07-14 09:43:59.705485] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:15.289 [2024-07-14 09:43:59.705964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.289 [2024-07-14 09:43:59.705992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:15.289 [2024-07-14 09:43:59.728312] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:15.289 [2024-07-14 09:43:59.728782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.289 [2024-07-14 09:43:59.728809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:15.547 [2024-07-14 09:43:59.750607] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:15.547 [2024-07-14 09:43:59.751132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.547 [2024-07-14 09:43:59.751175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:15.547 [2024-07-14 09:43:59.772547] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:15.547 [2024-07-14 09:43:59.772985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.547 [2024-07-14 09:43:59.773013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:15.547 [2024-07-14 09:43:59.794942] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:15.547 [2024-07-14 09:43:59.795686] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.547 [2024-07-14 09:43:59.795713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:15.547 [2024-07-14 09:43:59.817263] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:15.547 [2024-07-14 09:43:59.817810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.547 [2024-07-14 09:43:59.817836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:15.547 [2024-07-14 09:43:59.838895] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:15.547 [2024-07-14 09:43:59.839476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.547 [2024-07-14 09:43:59.839502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:15.547 [2024-07-14 09:43:59.860454] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:15.547 [2024-07-14 09:43:59.860974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.547 [2024-07-14 09:43:59.861003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:15.547 [2024-07-14 09:43:59.882189] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:15.547 [2024-07-14 09:43:59.882730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.547 [2024-07-14 09:43:59.882756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:15.547 [2024-07-14 09:43:59.903458] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:15.547 [2024-07-14 09:43:59.904010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.547 [2024-07-14 09:43:59.904038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:15.547 [2024-07-14 09:43:59.925089] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:15.547 [2024-07-14 09:43:59.925507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.547 [2024-07-14 09:43:59.925552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:15.547 [2024-07-14 09:43:59.947623] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 
00:34:15.547 [2024-07-14 09:43:59.948250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.547 [2024-07-14 09:43:59.948294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:15.547 [2024-07-14 09:43:59.970744] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:15.547 [2024-07-14 09:43:59.971369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.547 [2024-07-14 09:43:59.971413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:15.547 [2024-07-14 09:43:59.991627] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:15.547 [2024-07-14 09:43:59.992091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.547 [2024-07-14 09:43:59.992119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:15.805 [2024-07-14 09:44:00.012958] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:15.805 [2024-07-14 09:44:00.013401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.805 [2024-07-14 09:44:00.013452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:15.805 [2024-07-14 09:44:00.030096] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:15.805 [2024-07-14 09:44:00.030558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.805 [2024-07-14 09:44:00.030608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:15.805 [2024-07-14 09:44:00.047003] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:15.805 [2024-07-14 09:44:00.047545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.805 [2024-07-14 09:44:00.047577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:15.805 [2024-07-14 09:44:00.066606] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:15.805 [2024-07-14 09:44:00.067308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.805 [2024-07-14 09:44:00.067339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:15.805 [2024-07-14 09:44:00.088633] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:15.805 [2024-07-14 09:44:00.089083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.805 [2024-07-14 09:44:00.089112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:15.806 [2024-07-14 09:44:00.109343] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:15.806 [2024-07-14 09:44:00.109964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.806 [2024-07-14 09:44:00.109994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:15.806 [2024-07-14 09:44:00.132209] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:15.806 [2024-07-14 09:44:00.132711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.806 [2024-07-14 09:44:00.132740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:15.806 [2024-07-14 09:44:00.154662] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:15.806 [2024-07-14 09:44:00.155121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.806 [2024-07-14 09:44:00.155151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:15.806 [2024-07-14 09:44:00.176700] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:15.806 [2024-07-14 09:44:00.177165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.806 [2024-07-14 09:44:00.177192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:15.806 [2024-07-14 09:44:00.197623] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:15.806 [2024-07-14 09:44:00.198042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.806 [2024-07-14 09:44:00.198085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:15.806 [2024-07-14 09:44:00.218090] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:15.806 [2024-07-14 09:44:00.218549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.806 [2024-07-14 09:44:00.218593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:15.806 [2024-07-14 09:44:00.239003] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:15.806 [2024-07-14 09:44:00.239542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:15.806 [2024-07-14 09:44:00.239584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.064 [2024-07-14 09:44:00.258720] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:16.064 [2024-07-14 09:44:00.259111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.064 [2024-07-14 09:44:00.259140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.064 [2024-07-14 09:44:00.279847] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:16.064 [2024-07-14 09:44:00.280388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.064 [2024-07-14 09:44:00.280414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.064 [2024-07-14 09:44:00.302500] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:16.064 [2024-07-14 09:44:00.302900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.064 [2024-07-14 09:44:00.302926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.064 [2024-07-14 09:44:00.322142] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:16.064 [2024-07-14 09:44:00.322538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.064 [2024-07-14 09:44:00.322580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.064 [2024-07-14 09:44:00.344671] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:16.064 [2024-07-14 09:44:00.345307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.064 [2024-07-14 09:44:00.345335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.064 [2024-07-14 09:44:00.367528] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:16.064 [2024-07-14 09:44:00.368102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.064 [2024-07-14 09:44:00.368129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:34:16.064 [2024-07-14 09:44:00.388513] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:16.064 [2024-07-14 09:44:00.388962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.064 [2024-07-14 09:44:00.388991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.064 [2024-07-14 09:44:00.411377] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:16.064 [2024-07-14 09:44:00.411926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.064 [2024-07-14 09:44:00.411955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.064 [2024-07-14 09:44:00.432596] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:16.064 [2024-07-14 09:44:00.433129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.064 [2024-07-14 09:44:00.433172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.064 [2024-07-14 09:44:00.453710] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:16.064 [2024-07-14 09:44:00.454196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.064 [2024-07-14 09:44:00.454242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.064 [2024-07-14 09:44:00.476683] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:16.064 [2024-07-14 09:44:00.477285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.064 [2024-07-14 09:44:00.477319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.064 [2024-07-14 09:44:00.498414] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:16.064 [2024-07-14 09:44:00.499011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.064 [2024-07-14 09:44:00.499040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.323 [2024-07-14 09:44:00.520338] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:16.323 [2024-07-14 09:44:00.520753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.323 [2024-07-14 09:44:00.520797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.323 [2024-07-14 09:44:00.540811] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:16.323 [2024-07-14 09:44:00.541431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.323 [2024-07-14 09:44:00.541475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.323 [2024-07-14 09:44:00.563128] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:16.323 [2024-07-14 09:44:00.563602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.323 [2024-07-14 09:44:00.563654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.323 [2024-07-14 09:44:00.583375] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:16.323 [2024-07-14 09:44:00.584074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.323 [2024-07-14 09:44:00.584117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.323 [2024-07-14 09:44:00.604065] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:16.323 [2024-07-14 09:44:00.604586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.323 [2024-07-14 09:44:00.604627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.323 [2024-07-14 09:44:00.625950] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:16.323 [2024-07-14 09:44:00.626462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.323 [2024-07-14 09:44:00.626508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.323 [2024-07-14 09:44:00.648007] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:16.323 [2024-07-14 09:44:00.648434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.323 [2024-07-14 09:44:00.648463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.323 [2024-07-14 09:44:00.669983] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:16.323 [2024-07-14 09:44:00.670499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.323 [2024-07-14 09:44:00.670542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.323 [2024-07-14 09:44:00.690213] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:16.323 [2024-07-14 09:44:00.690844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.323 [2024-07-14 09:44:00.690894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.323 [2024-07-14 09:44:00.713231] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:16.323 [2024-07-14 09:44:00.713660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.323 [2024-07-14 09:44:00.713689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.323 [2024-07-14 09:44:00.734619] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:16.323 [2024-07-14 09:44:00.735136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.323 [2024-07-14 09:44:00.735166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.323 [2024-07-14 09:44:00.754176] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:16.323 [2024-07-14 09:44:00.754560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.323 [2024-07-14 09:44:00.754604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.323 [2024-07-14 09:44:00.772437] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:16.323 [2024-07-14 09:44:00.772830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.323 [2024-07-14 09:44:00.772858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.581 [2024-07-14 09:44:00.790394] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:16.581 [2024-07-14 09:44:00.790980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.581 [2024-07-14 09:44:00.791024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.581 [2024-07-14 09:44:00.811653] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:16.581 [2024-07-14 09:44:00.812327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.581 [2024-07-14 09:44:00.812371] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.581 [2024-07-14 09:44:00.831418] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:16.581 [2024-07-14 09:44:00.831822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.581 [2024-07-14 09:44:00.831872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.581 [2024-07-14 09:44:00.851210] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:16.581 [2024-07-14 09:44:00.851624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.581 [2024-07-14 09:44:00.851652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.581 [2024-07-14 09:44:00.870361] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:16.581 [2024-07-14 09:44:00.870853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.581 [2024-07-14 09:44:00.870917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.581 [2024-07-14 09:44:00.891303] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:16.581 [2024-07-14 09:44:00.891748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.582 [2024-07-14 09:44:00.891777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.582 [2024-07-14 09:44:00.911490] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:16.582 [2024-07-14 09:44:00.911916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.582 [2024-07-14 09:44:00.911945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.582 [2024-07-14 09:44:00.930988] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:16.582 [2024-07-14 09:44:00.931379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.582 [2024-07-14 09:44:00.931407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.582 [2024-07-14 09:44:00.952065] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:16.582 [2024-07-14 09:44:00.952720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.582 
[2024-07-14 09:44:00.952747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.582 [2024-07-14 09:44:00.974749] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:16.582 [2024-07-14 09:44:00.975132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.582 [2024-07-14 09:44:00.975179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.582 [2024-07-14 09:44:00.995583] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:16.582 [2024-07-14 09:44:00.996066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.582 [2024-07-14 09:44:00.996095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:16.582 [2024-07-14 09:44:01.019355] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:16.582 [2024-07-14 09:44:01.020083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.582 [2024-07-14 09:44:01.020125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.839 [2024-07-14 09:44:01.038439] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:16.839 [2024-07-14 09:44:01.038998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.839 [2024-07-14 09:44:01.039041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:16.839 [2024-07-14 09:44:01.059938] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112acd0) with pdu=0x2000190fef90 00:34:16.839 [2024-07-14 09:44:01.060509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.839 [2024-07-14 09:44:01.060537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:16.839 00:34:16.840 Latency(us) 00:34:16.840 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:16.840 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:34:16.840 nvme0n1 : 2.01 1469.94 183.74 0.00 0.00 10852.61 5631.24 23495.87 00:34:16.840 =================================================================================================================== 00:34:16.840 Total : 1469.94 183.74 0.00 0.00 10852.61 5631.24 23495.87 00:34:16.840 0 00:34:16.840 09:44:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:34:16.840 09:44:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:34:16.840 09:44:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:34:16.840 09:44:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:34:16.840 | .driver_specific 00:34:16.840 | .nvme_error 00:34:16.840 | .status_code 00:34:16.840 | .command_transient_transport_error' 00:34:17.097 09:44:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 95 > 0 )) 00:34:17.097 09:44:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 891718 00:34:17.097 09:44:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 891718 ']' 00:34:17.097 09:44:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 891718 00:34:17.097 09:44:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:34:17.098 09:44:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:17.098 09:44:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 891718 00:34:17.098 09:44:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:34:17.098 09:44:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:34:17.098 09:44:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 891718' 00:34:17.098 killing process with pid 891718 00:34:17.098 09:44:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 891718 00:34:17.098 Received shutdown signal, test time was about 2.000000 seconds 00:34:17.098 00:34:17.098 Latency(us) 00:34:17.098 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:17.098 =================================================================================================================== 00:34:17.098 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:17.098 09:44:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 891718 00:34:17.356 09:44:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 890354 00:34:17.356 09:44:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 890354 ']' 00:34:17.356 09:44:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 890354 00:34:17.356 09:44:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:34:17.356 09:44:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:17.356 09:44:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 890354 00:34:17.356 09:44:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:17.356 09:44:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:17.356 09:44:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 890354' 00:34:17.356 killing process with pid 890354 00:34:17.356 09:44:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 890354 00:34:17.356 09:44:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 890354 
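Note: the get_transient_errcount helper traced above derives its pass/fail count by reading bdevperf's iostat JSON over the bperf RPC socket. A minimal stand-alone sketch of the same query, assuming (as in this run) an SPDK checkout under $SPDK_DIR, a bperf instance listening on /var/tmp/bperf.sock, and a bdev named nvme0n1:

    #!/usr/bin/env bash
    # Sketch of the query performed by host/digest.sh's get_transient_errcount.
    # Paths, socket name and bdev name are taken from the trace above; adjust for
    # other runs.
    SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
    errcount=$("$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    # The test only asserts that at least one transient transport error was observed
    # (here the counter was 95, hence the "(( 95 > 0 ))" check above).
    (( errcount > 0 )) && echo "saw $errcount transient transport errors"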
00:34:17.615 00:34:17.615 real 0m15.239s 00:34:17.615 user 0m30.668s 00:34:17.615 sys 0m3.795s 00:34:17.615 09:44:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:17.615 09:44:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:17.615 ************************************ 00:34:17.615 END TEST nvmf_digest_error 00:34:17.615 ************************************ 00:34:17.615 09:44:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:34:17.615 09:44:01 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:34:17.615 09:44:01 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:34:17.615 09:44:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:17.615 09:44:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:34:17.615 09:44:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:17.615 09:44:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:34:17.615 09:44:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:17.615 09:44:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:17.615 rmmod nvme_tcp 00:34:17.615 rmmod nvme_fabrics 00:34:17.615 rmmod nvme_keyring 00:34:17.615 09:44:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:17.615 09:44:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:34:17.615 09:44:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:34:17.615 09:44:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 890354 ']' 00:34:17.615 09:44:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 890354 00:34:17.615 09:44:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 890354 ']' 00:34:17.615 09:44:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 890354 00:34:17.615 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (890354) - No such process 00:34:17.615 09:44:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 890354 is not found' 00:34:17.615 Process with pid 890354 is not found 00:34:17.615 09:44:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:17.615 09:44:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:17.615 09:44:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:17.615 09:44:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:17.615 09:44:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:17.615 09:44:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:17.615 09:44:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:17.615 09:44:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:20.143 09:44:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:20.143 00:34:20.143 real 0m34.697s 00:34:20.143 user 1m1.910s 00:34:20.143 sys 0m9.129s 00:34:20.143 09:44:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:20.143 09:44:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:20.143 ************************************ 00:34:20.143 END TEST nvmf_digest 00:34:20.143 ************************************ 
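Note: the nvmftestfini teardown traced above reduces to a handful of host-side steps. A hedged manual equivalent is sketched below; the interface and namespace names match this run, the target pid variable ($nvmfpid) is illustrative, and the explicit "ip netns delete" is an assumption about what _remove_spdk_ns does rather than a literal transcript of it:

    # Approximate manual equivalent of nvmftestfini as traced above.
    modprobe -v -r nvme-tcp          # also unloads nvme_fabrics / nvme_keyring, as logged
    modprobe -v -r nvme-fabrics
    kill -0 "$nvmfpid" 2>/dev/null && kill "$nvmfpid"   # target app; already gone in this run
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null          # assumed effect of _remove_spdk_ns
    ip -4 addr flush cvl_0_1                             # drop the initiator-side test address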
00:34:20.143 09:44:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:34:20.143 09:44:04 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:34:20.143 09:44:04 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:34:20.143 09:44:04 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:34:20.143 09:44:04 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:34:20.143 09:44:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:20.143 09:44:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:20.143 09:44:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:20.143 ************************************ 00:34:20.143 START TEST nvmf_bdevperf 00:34:20.143 ************************************ 00:34:20.143 09:44:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:34:20.143 * Looking for test storage... 00:34:20.143 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:20.143 09:44:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:20.143 09:44:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:34:20.144 09:44:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:20.144 09:44:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:20.144 09:44:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:20.144 09:44:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:20.144 09:44:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:20.144 09:44:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:20.144 09:44:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:20.144 09:44:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:20.144 09:44:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:20.144 09:44:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:20.144 09:44:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:20.144 09:44:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:20.144 09:44:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:20.144 09:44:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:20.144 09:44:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:20.144 09:44:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:20.144 09:44:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:20.144 09:44:04 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:20.144 09:44:04 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:20.144 09:44:04 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:20.144 09:44:04 
nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:20.144 09:44:04 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:20.144 09:44:04 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:20.144 09:44:04 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:34:20.144 09:44:04 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:20.144 09:44:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:34:20.144 09:44:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:20.144 09:44:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:20.144 09:44:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:20.144 09:44:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:20.144 09:44:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:20.144 09:44:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:20.144 09:44:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:20.144 09:44:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:20.144 09:44:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:20.144 09:44:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:20.144 09:44:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 
00:34:20.144 09:44:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:20.144 09:44:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:20.144 09:44:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:20.144 09:44:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:20.144 09:44:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:20.144 09:44:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:20.144 09:44:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:20.144 09:44:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:20.144 09:44:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:20.144 09:44:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:20.144 09:44:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:34:20.144 09:44:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:22.048 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:22.048 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:34:22.048 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:22.048 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:22.048 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:22.048 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:22.048 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:22.048 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:34:22.048 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:22.048 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:34:22.048 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:34:22.048 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:34:22.048 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:34:22.048 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:34:22.048 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:34:22.048 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:22.048 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:22.048 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:22.048 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:22.048 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:22.048 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:22.048 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:22.048 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:22.048 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:22.048 09:44:06 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:22.048 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:22.048 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:22.048 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:22.048 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:22.048 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:22.048 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:22.048 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:22.048 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:22.048 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:22.048 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:22.048 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:22.048 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:22.048 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:22.048 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:22.048 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:22.048 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:22.048 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:22.048 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:22.048 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:22.048 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:22.048 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:22.048 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:22.048 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:22.048 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:22.048 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:22.048 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:22.048 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:22.048 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:22.048 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:22.048 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:22.048 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:22.048 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:22.048 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:22.048 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:22.048 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:22.048 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:22.048 09:44:06 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:22.049 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:22.049 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:22.049 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:22.049 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:22.049 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:22.049 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:22.049 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:22.049 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:22.049 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:22.049 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:22.049 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:34:22.049 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:22.049 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:22.049 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:22.049 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:22.049 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:22.049 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:22.049 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:22.049 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:22.049 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:22.049 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:22.049 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:22.049 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:22.049 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:22.049 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:22.049 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:22.049 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:22.049 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:22.049 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:22.049 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:22.049 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:22.049 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:22.049 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:22.049 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:22.049 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:22.049 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:34:22.049 00:34:22.049 --- 10.0.0.2 ping statistics --- 00:34:22.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:22.049 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:34:22.049 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:22.049 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:22.049 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:34:22.049 00:34:22.049 --- 10.0.0.1 ping statistics --- 00:34:22.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:22.049 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:34:22.049 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:22.049 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:34:22.049 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:22.049 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:22.049 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:22.049 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:22.049 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:22.049 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:22.049 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:22.049 09:44:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:34:22.049 09:44:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:34:22.049 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:22.049 09:44:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:22.049 09:44:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:22.049 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=894064 00:34:22.049 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:34:22.049 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 894064 00:34:22.049 09:44:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 894064 ']' 00:34:22.049 09:44:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:22.049 09:44:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:22.049 09:44:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:22.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:22.049 09:44:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:22.049 09:44:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:22.049 [2024-07-14 09:44:06.297367] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
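Everything nvmftestinit did above can be read back out of the trace: both ice ports (0x8086:0x159b) are picked up as cvl_0_0 and cvl_0_1, the target-side port is isolated in the cvl_0_0_ns_spdk network namespace, addresses and the NVMe/TCP firewall rule are applied, reachability is checked in both directions, nvme-tcp is loaded, and the target is finally started inside the namespace. A condensed sketch of that sequence, using only commands that appear in the log (interface names and addresses as printed above; the nvmf_tgt path is shortened):

ip netns add cvl_0_0_ns_spdk                              # namespace that owns the target-side port
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # move the first ice port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side stays in the default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic back in
ping -c 1 10.0.0.2                                        # initiator -> target (0.130 ms above)
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target -> initiator (0.145 ms above)
modprobe nvme-tcp
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE   # SHM id 0, cores 1-3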
00:34:22.049 [2024-07-14 09:44:06.297439] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:22.049 EAL: No free 2048 kB hugepages reported on node 1 00:34:22.049 [2024-07-14 09:44:06.366371] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:22.049 [2024-07-14 09:44:06.461187] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:22.049 [2024-07-14 09:44:06.461260] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:22.049 [2024-07-14 09:44:06.461276] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:22.049 [2024-07-14 09:44:06.461289] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:22.049 [2024-07-14 09:44:06.461301] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:22.049 [2024-07-14 09:44:06.461661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:22.049 [2024-07-14 09:44:06.461713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:34:22.049 [2024-07-14 09:44:06.461716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:22.308 09:44:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:22.308 09:44:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:34:22.308 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:22.308 09:44:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:22.308 09:44:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:22.308 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:22.308 09:44:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:22.308 09:44:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.308 09:44:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:22.308 [2024-07-14 09:44:06.590650] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:22.308 09:44:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.308 09:44:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:22.308 09:44:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.308 09:44:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:22.308 Malloc0 00:34:22.308 09:44:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.308 09:44:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:22.308 09:44:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.308 09:44:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:22.308 09:44:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.308 09:44:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:22.308 09:44:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.308 09:44:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:22.308 09:44:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.308 09:44:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:22.308 09:44:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.308 09:44:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:22.308 [2024-07-14 09:44:06.653652] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:22.308 09:44:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.308 09:44:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:34:22.308 09:44:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:34:22.308 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:34:22.308 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:34:22.308 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:22.308 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:22.308 { 00:34:22.308 "params": { 00:34:22.308 "name": "Nvme$subsystem", 00:34:22.308 "trtype": "$TEST_TRANSPORT", 00:34:22.308 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:22.308 "adrfam": "ipv4", 00:34:22.308 "trsvcid": "$NVMF_PORT", 00:34:22.308 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:22.308 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:22.308 "hdgst": ${hdgst:-false}, 00:34:22.308 "ddgst": ${ddgst:-false} 00:34:22.308 }, 00:34:22.308 "method": "bdev_nvme_attach_controller" 00:34:22.308 } 00:34:22.308 EOF 00:34:22.308 )") 00:34:22.308 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:34:22.308 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:34:22.308 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:34:22.308 09:44:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:22.308 "params": { 00:34:22.308 "name": "Nvme1", 00:34:22.308 "trtype": "tcp", 00:34:22.308 "traddr": "10.0.0.2", 00:34:22.308 "adrfam": "ipv4", 00:34:22.308 "trsvcid": "4420", 00:34:22.308 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:22.308 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:22.308 "hdgst": false, 00:34:22.308 "ddgst": false 00:34:22.308 }, 00:34:22.308 "method": "bdev_nvme_attach_controller" 00:34:22.308 }' 00:34:22.308 [2024-07-14 09:44:06.701998] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
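The target-side provisioning and the first bdevperf invocation are likewise all in the trace: tgt_init creates the TCP transport, a 64 MiB / 512 B malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 with the namespace attached, and the 10.0.0.2:4420 listener, after which gen_nvmf_target_json emits the attach-controller parameters that bdevperf reads over /dev/fd/62. A sketch of the same steps driven through scripts/rpc.py (rpc_cmd in the log is the test harness wrapper around it); the config file name is illustrative, and the "subsystems"/"bdev" wrapper is my assumption about how the harness packages the params block printed above:

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

cat > /tmp/nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
            "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1", "hdgst": false, "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf --json /tmp/nvme1.json -q 128 -o 4096 -w verify -t 1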
00:34:22.308 [2024-07-14 09:44:06.702071] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid894098 ] 00:34:22.308 EAL: No free 2048 kB hugepages reported on node 1 00:34:22.566 [2024-07-14 09:44:06.761813] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:22.566 [2024-07-14 09:44:06.850568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:22.823 Running I/O for 1 seconds... 00:34:23.758 00:34:23.758 Latency(us) 00:34:23.758 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:23.758 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:23.758 Verification LBA range: start 0x0 length 0x4000 00:34:23.758 Nvme1n1 : 1.01 8717.78 34.05 0.00 0.00 14591.47 1577.72 15243.19 00:34:23.758 =================================================================================================================== 00:34:23.758 Total : 8717.78 34.05 0.00 0.00 14591.47 1577.72 15243.19 00:34:24.017 09:44:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=894349 00:34:24.017 09:44:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:34:24.017 09:44:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:34:24.017 09:44:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:34:24.017 09:44:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:34:24.017 09:44:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:34:24.017 09:44:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:24.017 09:44:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:24.017 { 00:34:24.017 "params": { 00:34:24.017 "name": "Nvme$subsystem", 00:34:24.017 "trtype": "$TEST_TRANSPORT", 00:34:24.017 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:24.017 "adrfam": "ipv4", 00:34:24.017 "trsvcid": "$NVMF_PORT", 00:34:24.017 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:24.017 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:24.017 "hdgst": ${hdgst:-false}, 00:34:24.017 "ddgst": ${ddgst:-false} 00:34:24.017 }, 00:34:24.017 "method": "bdev_nvme_attach_controller" 00:34:24.017 } 00:34:24.017 EOF 00:34:24.017 )") 00:34:24.017 09:44:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:34:24.017 09:44:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:34:24.017 09:44:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:34:24.017 09:44:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:24.017 "params": { 00:34:24.017 "name": "Nvme1", 00:34:24.017 "trtype": "tcp", 00:34:24.017 "traddr": "10.0.0.2", 00:34:24.017 "adrfam": "ipv4", 00:34:24.017 "trsvcid": "4420", 00:34:24.017 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:24.017 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:24.017 "hdgst": false, 00:34:24.017 "ddgst": false 00:34:24.017 }, 00:34:24.017 "method": "bdev_nvme_attach_controller" 00:34:24.017 }' 00:34:24.017 [2024-07-14 09:44:08.401158] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
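The first run finishes cleanly (8717.78 IOPS, roughly 14.6 ms average latency at queue depth 128 over 1.01 s). The script then launches a second bdevperf for 15 seconds with -f, waits three seconds, and hard-kills the target; the wall of 'ABORTED - SQ DELETION' completions that follows is the initiator manually completing its queued reads once the TCP qpair to the dead target is torn down. A sketch of that failure-injection step as it reads from host/bdevperf.sh in the trace (the config file name is illustrative, and the PIDs shown are just the ones from this run):

./build/examples/bdevperf --json /tmp/nvme1.json -q 128 -o 4096 -w verify -t 15 -f &
bdevperfpid=$!          # 894349 in this run
sleep 3                 # let the verify workload get going
kill -9 "$nvmfpid"      # nvmf_tgt pid, 894064 above; the connection drops with I/O in flight
sleep 3                 # give the initiator time to abort everything still queued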
00:34:24.017 [2024-07-14 09:44:08.401250] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid894349 ] 00:34:24.017 EAL: No free 2048 kB hugepages reported on node 1 00:34:24.017 [2024-07-14 09:44:08.460431] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:24.276 [2024-07-14 09:44:08.545288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:24.535 Running I/O for 15 seconds... 00:34:27.067 09:44:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 894064 00:34:27.067 09:44:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:34:27.067 [2024-07-14 09:44:11.374724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:51136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.067 [2024-07-14 09:44:11.374776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.067 [2024-07-14 09:44:11.374812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:51144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.067 [2024-07-14 09:44:11.374830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.067 [2024-07-14 09:44:11.374849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:51152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.067 [2024-07-14 09:44:11.374874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.067 [2024-07-14 09:44:11.374895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:51160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.067 [2024-07-14 09:44:11.374925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.067 [2024-07-14 09:44:11.374941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:51168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.067 [2024-07-14 09:44:11.374956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.067 [2024-07-14 09:44:11.374972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:51176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.067 [2024-07-14 09:44:11.374986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.067 [2024-07-14 09:44:11.375002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:51184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.067 [2024-07-14 09:44:11.375017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.067 [2024-07-14 09:44:11.375041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:51192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.067 [2024-07-14 09:44:11.375057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.067 [2024-07-14 09:44:11.375073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:51200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.067 [2024-07-14 09:44:11.375088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.067 [2024-07-14 09:44:11.375104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:51208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.067 [2024-07-14 09:44:11.375119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.067 [2024-07-14 09:44:11.375135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:51216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.067 [2024-07-14 09:44:11.375165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.067 [2024-07-14 09:44:11.375180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:51224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.067 [2024-07-14 09:44:11.375193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.067 [2024-07-14 09:44:11.375208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:51232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.067 [2024-07-14 09:44:11.375236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.068 [2024-07-14 09:44:11.375251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:51240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.068 [2024-07-14 09:44:11.375265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.068 [2024-07-14 09:44:11.375296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:51248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.068 [2024-07-14 09:44:11.375312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.068 [2024-07-14 09:44:11.375329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:51256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.068 [2024-07-14 09:44:11.375344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.068 [2024-07-14 09:44:11.375361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:51264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.068 [2024-07-14 09:44:11.375377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.068 [2024-07-14 09:44:11.375394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:51272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.068 [2024-07-14 09:44:11.375409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:34:27.068 [2024-07-14 09:44:11.375426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:51280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.068 [2024-07-14 09:44:11.375441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.068 [2024-07-14 09:44:11.375460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:51288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.068 [2024-07-14 09:44:11.375480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.068 [2024-07-14 09:44:11.375497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:51296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.068 [2024-07-14 09:44:11.375513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.068 [2024-07-14 09:44:11.375530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:51304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.068 [2024-07-14 09:44:11.375545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.068 [2024-07-14 09:44:11.375562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.068 [2024-07-14 09:44:11.375579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.068 [2024-07-14 09:44:11.375596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:51320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.068 [2024-07-14 09:44:11.375612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.068 [2024-07-14 09:44:11.375629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:51328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.068 [2024-07-14 09:44:11.375644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.068 [2024-07-14 09:44:11.375662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:51336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.068 [2024-07-14 09:44:11.375677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.068 [2024-07-14 09:44:11.375695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:51344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.068 [2024-07-14 09:44:11.375710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.068 [2024-07-14 09:44:11.375727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:51352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.068 [2024-07-14 09:44:11.375742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.068 [2024-07-14 
09:44:11.375759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:51360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.068 [2024-07-14 09:44:11.375774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.068 [2024-07-14 09:44:11.375791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:51368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.068 [2024-07-14 09:44:11.375806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.068 [2024-07-14 09:44:11.375823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:51376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.068 [2024-07-14 09:44:11.375838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.068 [2024-07-14 09:44:11.375855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:51384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.068 [2024-07-14 09:44:11.375878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.068 [2024-07-14 09:44:11.375901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:51392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.068 [2024-07-14 09:44:11.375932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.068 [2024-07-14 09:44:11.375949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:51400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.068 [2024-07-14 09:44:11.375964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.068 [2024-07-14 09:44:11.375979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:51408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.068 [2024-07-14 09:44:11.375993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.068 [2024-07-14 09:44:11.376008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:51416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.068 [2024-07-14 09:44:11.376023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.068 [2024-07-14 09:44:11.376038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:51424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.068 [2024-07-14 09:44:11.376052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.068 [2024-07-14 09:44:11.376067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:51432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.068 [2024-07-14 09:44:11.376081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.068 [2024-07-14 09:44:11.376097] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:51440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.068 [2024-07-14 09:44:11.376110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.068 [2024-07-14 09:44:11.376126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:51448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.068 [2024-07-14 09:44:11.376139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.068 [2024-07-14 09:44:11.376173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:51456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.068 [2024-07-14 09:44:11.376186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.068 [2024-07-14 09:44:11.376202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:51464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.068 [2024-07-14 09:44:11.376231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.068 [2024-07-14 09:44:11.376249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:51472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.068 [2024-07-14 09:44:11.376264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.068 [2024-07-14 09:44:11.376280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:51480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.068 [2024-07-14 09:44:11.376295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.068 [2024-07-14 09:44:11.376313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:51488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.068 [2024-07-14 09:44:11.376332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.068 [2024-07-14 09:44:11.376350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:51496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.068 [2024-07-14 09:44:11.376365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.068 [2024-07-14 09:44:11.376383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:51504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.068 [2024-07-14 09:44:11.376398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.068 [2024-07-14 09:44:11.376415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:51512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.068 [2024-07-14 09:44:11.376430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.068 [2024-07-14 09:44:11.376447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:112 nsid:1 lba:51520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.068 [2024-07-14 09:44:11.376462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.068 [2024-07-14 09:44:11.376479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:51528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.068 [2024-07-14 09:44:11.376494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.068 [2024-07-14 09:44:11.376511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:51536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.068 [2024-07-14 09:44:11.376526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.068 [2024-07-14 09:44:11.376543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:51544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.068 [2024-07-14 09:44:11.376559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.068 [2024-07-14 09:44:11.376576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:51552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.068 [2024-07-14 09:44:11.376591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.068 [2024-07-14 09:44:11.376608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:51560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.068 [2024-07-14 09:44:11.376624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.068 [2024-07-14 09:44:11.376641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:51568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.068 [2024-07-14 09:44:11.376657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.069 [2024-07-14 09:44:11.376673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:51576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.069 [2024-07-14 09:44:11.376689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.069 [2024-07-14 09:44:11.376705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:51584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.069 [2024-07-14 09:44:11.376721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.069 [2024-07-14 09:44:11.376738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:51592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.069 [2024-07-14 09:44:11.376757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.069 [2024-07-14 09:44:11.376774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:51600 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.069 [2024-07-14 09:44:11.376790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.069 [2024-07-14 09:44:11.376806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:51608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.069 [2024-07-14 09:44:11.376822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.069 [2024-07-14 09:44:11.376839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:51616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.069 [2024-07-14 09:44:11.376855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.069 [2024-07-14 09:44:11.376880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:51624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.069 [2024-07-14 09:44:11.376897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.069 [2024-07-14 09:44:11.376914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:51632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.069 [2024-07-14 09:44:11.376945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.069 [2024-07-14 09:44:11.376961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:51640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.069 [2024-07-14 09:44:11.376975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.069 [2024-07-14 09:44:11.376991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:51648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.069 [2024-07-14 09:44:11.377005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.069 [2024-07-14 09:44:11.377020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:51656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.069 [2024-07-14 09:44:11.377034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.069 [2024-07-14 09:44:11.377049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:51664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.069 [2024-07-14 09:44:11.377063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.069 [2024-07-14 09:44:11.377079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:51672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.069 [2024-07-14 09:44:11.377093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.069 [2024-07-14 09:44:11.377109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:51680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:27.069 [2024-07-14 09:44:11.377122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.069 [2024-07-14 09:44:11.377137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:51688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.069 [2024-07-14 09:44:11.377166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.069 [2024-07-14 09:44:11.377185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:51696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.069 [2024-07-14 09:44:11.377198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.069 [2024-07-14 09:44:11.377229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:51704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.069 [2024-07-14 09:44:11.377245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.069 [2024-07-14 09:44:11.377262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:51712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.069 [2024-07-14 09:44:11.377277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.069 [2024-07-14 09:44:11.377294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:51720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.069 [2024-07-14 09:44:11.377309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.069 [2024-07-14 09:44:11.377326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:51728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.069 [2024-07-14 09:44:11.377341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.069 [2024-07-14 09:44:11.377358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:51736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.069 [2024-07-14 09:44:11.377373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.069 [2024-07-14 09:44:11.377390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:51744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.069 [2024-07-14 09:44:11.377406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.069 [2024-07-14 09:44:11.377422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:51752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.069 [2024-07-14 09:44:11.377438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.069 [2024-07-14 09:44:11.377455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:52144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:27.069 [2024-07-14 09:44:11.377471] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.069 [2024-07-14 09:44:11.377488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:52152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:27.069 [2024-07-14 09:44:11.377503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.069 [2024-07-14 09:44:11.377520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:51760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.069 [2024-07-14 09:44:11.377536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.069 [2024-07-14 09:44:11.377553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:51768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.069 [2024-07-14 09:44:11.377569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.069 [2024-07-14 09:44:11.377585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:51776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.069 [2024-07-14 09:44:11.377605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.069 [2024-07-14 09:44:11.377629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:51784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.069 [2024-07-14 09:44:11.377645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.069 [2024-07-14 09:44:11.377662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:51792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.069 [2024-07-14 09:44:11.377677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.069 [2024-07-14 09:44:11.377694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:51800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.069 [2024-07-14 09:44:11.377710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.069 [2024-07-14 09:44:11.377727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:51808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.069 [2024-07-14 09:44:11.377742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.069 [2024-07-14 09:44:11.377758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:51816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.069 [2024-07-14 09:44:11.377774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.069 [2024-07-14 09:44:11.377790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:51824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.069 [2024-07-14 09:44:11.377805] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.069 [2024-07-14 09:44:11.377824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:51832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.069 [2024-07-14 09:44:11.377840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.069 [2024-07-14 09:44:11.377857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:51840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.069 [2024-07-14 09:44:11.377882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.069 [2024-07-14 09:44:11.377901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:51848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.069 [2024-07-14 09:44:11.377931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.069 [2024-07-14 09:44:11.377946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:51856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.069 [2024-07-14 09:44:11.377960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.069 [2024-07-14 09:44:11.377976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:51864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.069 [2024-07-14 09:44:11.377990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.069 [2024-07-14 09:44:11.378005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:51872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.069 [2024-07-14 09:44:11.378019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.069 [2024-07-14 09:44:11.378038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:51880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.069 [2024-07-14 09:44:11.378052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.069 [2024-07-14 09:44:11.378068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:51888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.069 [2024-07-14 09:44:11.378082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.069 [2024-07-14 09:44:11.378097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:51896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.069 [2024-07-14 09:44:11.378111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.070 [2024-07-14 09:44:11.378126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:51904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.070 [2024-07-14 09:44:11.378141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.070 [2024-07-14 09:44:11.378161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:51912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.070 [2024-07-14 09:44:11.378188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.070 [2024-07-14 09:44:11.378203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:51920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.070 [2024-07-14 09:44:11.378215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.070 [2024-07-14 09:44:11.378230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:51928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.070 [2024-07-14 09:44:11.378261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.070 [2024-07-14 09:44:11.378279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:51936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.070 [2024-07-14 09:44:11.378295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.070 [2024-07-14 09:44:11.378312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:51944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.070 [2024-07-14 09:44:11.378328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.070 [2024-07-14 09:44:11.378346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:51952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.070 [2024-07-14 09:44:11.378361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.070 [2024-07-14 09:44:11.378378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:51960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.070 [2024-07-14 09:44:11.378394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.070 [2024-07-14 09:44:11.378411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:51968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.070 [2024-07-14 09:44:11.378427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.070 [2024-07-14 09:44:11.378445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:51976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.070 [2024-07-14 09:44:11.378464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.070 [2024-07-14 09:44:11.378482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:51984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.070 [2024-07-14 09:44:11.378498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:34:27.070 [2024-07-14 09:44:11.378516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:51992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.070 [2024-07-14 09:44:11.378532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.070 [2024-07-14 09:44:11.378549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:52000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.070 [2024-07-14 09:44:11.378564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.070 [2024-07-14 09:44:11.378582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:52008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.070 [2024-07-14 09:44:11.378597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.070 [2024-07-14 09:44:11.378614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:52016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.070 [2024-07-14 09:44:11.378630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.070 [2024-07-14 09:44:11.378647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:52024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.070 [2024-07-14 09:44:11.378662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.070 [2024-07-14 09:44:11.378679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:52032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.070 [2024-07-14 09:44:11.378695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.070 [2024-07-14 09:44:11.378713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:52040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.070 [2024-07-14 09:44:11.378728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.070 [2024-07-14 09:44:11.378745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:52048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.070 [2024-07-14 09:44:11.378760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.070 [2024-07-14 09:44:11.378778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:52056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.070 [2024-07-14 09:44:11.378794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.070 [2024-07-14 09:44:11.378810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:52064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.070 [2024-07-14 09:44:11.378826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.070 [2024-07-14 09:44:11.378843] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:52072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.070 [2024-07-14 09:44:11.378859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.070 [2024-07-14 09:44:11.378886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:52080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.070 [2024-07-14 09:44:11.378907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.070 [2024-07-14 09:44:11.378939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:52088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.070 [2024-07-14 09:44:11.378954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.070 [2024-07-14 09:44:11.378969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:52096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.070 [2024-07-14 09:44:11.378983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.070 [2024-07-14 09:44:11.378999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:52104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.070 [2024-07-14 09:44:11.379014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.070 [2024-07-14 09:44:11.379029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:52112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.070 [2024-07-14 09:44:11.379043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.070 [2024-07-14 09:44:11.379059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:52120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.070 [2024-07-14 09:44:11.379073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.070 [2024-07-14 09:44:11.379088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:52128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:27.070 [2024-07-14 09:44:11.379102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.070 [2024-07-14 09:44:11.379117] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f11f0 is same with the state(5) to be set 00:34:27.070 [2024-07-14 09:44:11.379133] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:27.070 [2024-07-14 09:44:11.379144] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:27.070 [2024-07-14 09:44:11.379173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52136 len:8 PRP1 0x0 PRP2 0x0 00:34:27.070 [2024-07-14 09:44:11.379185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.070 [2024-07-14 
09:44:11.379260] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6f11f0 was disconnected and freed. reset controller. 00:34:27.070 [2024-07-14 09:44:11.383123] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.070 [2024-07-14 09:44:11.383205] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.070 [2024-07-14 09:44:11.383966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.070 [2024-07-14 09:44:11.383997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.070 [2024-07-14 09:44:11.384014] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.070 [2024-07-14 09:44:11.384256] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.070 [2024-07-14 09:44:11.384507] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.070 [2024-07-14 09:44:11.384531] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.070 [2024-07-14 09:44:11.384554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.070 [2024-07-14 09:44:11.388132] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:27.070 [2024-07-14 09:44:11.397405] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.070 [2024-07-14 09:44:11.397892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.070 [2024-07-14 09:44:11.397921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.070 [2024-07-14 09:44:11.397937] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.070 [2024-07-14 09:44:11.398181] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.070 [2024-07-14 09:44:11.398425] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.070 [2024-07-14 09:44:11.398448] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.070 [2024-07-14 09:44:11.398462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.070 [2024-07-14 09:44:11.401990] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:27.070 [2024-07-14 09:44:11.411452] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.070 [2024-07-14 09:44:11.411920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.070 [2024-07-14 09:44:11.411952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.070 [2024-07-14 09:44:11.411970] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.070 [2024-07-14 09:44:11.412209] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.071 [2024-07-14 09:44:11.412452] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.071 [2024-07-14 09:44:11.412475] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.071 [2024-07-14 09:44:11.412490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.071 [2024-07-14 09:44:11.416080] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:27.071 [2024-07-14 09:44:11.425384] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.071 [2024-07-14 09:44:11.425870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.071 [2024-07-14 09:44:11.425902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.071 [2024-07-14 09:44:11.425919] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.071 [2024-07-14 09:44:11.426158] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.071 [2024-07-14 09:44:11.426401] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.071 [2024-07-14 09:44:11.426424] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.071 [2024-07-14 09:44:11.426438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.071 [2024-07-14 09:44:11.430029] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:27.071 [2024-07-14 09:44:11.439336] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.071 [2024-07-14 09:44:11.439826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.071 [2024-07-14 09:44:11.439856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.071 [2024-07-14 09:44:11.439883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.071 [2024-07-14 09:44:11.440124] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.071 [2024-07-14 09:44:11.440366] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.071 [2024-07-14 09:44:11.440389] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.071 [2024-07-14 09:44:11.440404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.071 [2024-07-14 09:44:11.443988] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:27.071 [2024-07-14 09:44:11.453289] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.071 [2024-07-14 09:44:11.453762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.071 [2024-07-14 09:44:11.453792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.071 [2024-07-14 09:44:11.453810] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.071 [2024-07-14 09:44:11.454057] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.071 [2024-07-14 09:44:11.454299] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.071 [2024-07-14 09:44:11.454322] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.071 [2024-07-14 09:44:11.454337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.071 [2024-07-14 09:44:11.457922] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:27.071 [2024-07-14 09:44:11.467228] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.071 [2024-07-14 09:44:11.467720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.071 [2024-07-14 09:44:11.467761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.071 [2024-07-14 09:44:11.467777] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.071 [2024-07-14 09:44:11.468045] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.071 [2024-07-14 09:44:11.468289] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.071 [2024-07-14 09:44:11.468312] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.071 [2024-07-14 09:44:11.468327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.071 [2024-07-14 09:44:11.471918] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:27.071 [2024-07-14 09:44:11.481220] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.071 [2024-07-14 09:44:11.481841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.071 [2024-07-14 09:44:11.481902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.071 [2024-07-14 09:44:11.481920] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.071 [2024-07-14 09:44:11.482158] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.071 [2024-07-14 09:44:11.482406] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.071 [2024-07-14 09:44:11.482428] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.071 [2024-07-14 09:44:11.482443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.071 [2024-07-14 09:44:11.486063] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:27.071 [2024-07-14 09:44:11.495150] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.071 [2024-07-14 09:44:11.495727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.071 [2024-07-14 09:44:11.495777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.071 [2024-07-14 09:44:11.495795] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.071 [2024-07-14 09:44:11.496043] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.071 [2024-07-14 09:44:11.496286] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.071 [2024-07-14 09:44:11.496308] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.071 [2024-07-14 09:44:11.496323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.071 [2024-07-14 09:44:11.499909] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:27.071 [2024-07-14 09:44:11.508991] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.071 [2024-07-14 09:44:11.509434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.071 [2024-07-14 09:44:11.509465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.071 [2024-07-14 09:44:11.509482] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.071 [2024-07-14 09:44:11.509720] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.071 [2024-07-14 09:44:11.509977] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.071 [2024-07-14 09:44:11.510001] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.071 [2024-07-14 09:44:11.510016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.071 [2024-07-14 09:44:11.513643] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:27.330 [2024-07-14 09:44:11.522979] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.330 [2024-07-14 09:44:11.523437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.330 [2024-07-14 09:44:11.523468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.330 [2024-07-14 09:44:11.523485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.330 [2024-07-14 09:44:11.523724] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.330 [2024-07-14 09:44:11.523980] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.330 [2024-07-14 09:44:11.524004] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.330 [2024-07-14 09:44:11.524019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.330 [2024-07-14 09:44:11.527719] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:27.330 [2024-07-14 09:44:11.537024] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.330 [2024-07-14 09:44:11.537645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.330 [2024-07-14 09:44:11.537694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.330 [2024-07-14 09:44:11.537712] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.330 [2024-07-14 09:44:11.537963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.330 [2024-07-14 09:44:11.538207] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.330 [2024-07-14 09:44:11.538230] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.330 [2024-07-14 09:44:11.538245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.330 [2024-07-14 09:44:11.541823] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:27.330 [2024-07-14 09:44:11.550915] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.330 [2024-07-14 09:44:11.551389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.330 [2024-07-14 09:44:11.551420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.330 [2024-07-14 09:44:11.551438] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.330 [2024-07-14 09:44:11.551676] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.330 [2024-07-14 09:44:11.551930] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.330 [2024-07-14 09:44:11.551953] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.330 [2024-07-14 09:44:11.551968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.330 [2024-07-14 09:44:11.555554] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:27.330 [2024-07-14 09:44:11.564873] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.330 [2024-07-14 09:44:11.565357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.330 [2024-07-14 09:44:11.565387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.330 [2024-07-14 09:44:11.565405] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.330 [2024-07-14 09:44:11.565644] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.330 [2024-07-14 09:44:11.565897] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.330 [2024-07-14 09:44:11.565927] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.330 [2024-07-14 09:44:11.565942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.330 [2024-07-14 09:44:11.569529] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:27.330 [2024-07-14 09:44:11.578841] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.330 [2024-07-14 09:44:11.579318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.330 [2024-07-14 09:44:11.579349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.330 [2024-07-14 09:44:11.579372] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.330 [2024-07-14 09:44:11.579612] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.330 [2024-07-14 09:44:11.579854] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.330 [2024-07-14 09:44:11.579887] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.330 [2024-07-14 09:44:11.579903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.330 [2024-07-14 09:44:11.583486] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:27.330 [2024-07-14 09:44:11.592792] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.330 [2024-07-14 09:44:11.593267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.330 [2024-07-14 09:44:11.593298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.330 [2024-07-14 09:44:11.593315] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.330 [2024-07-14 09:44:11.593553] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.330 [2024-07-14 09:44:11.593795] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.330 [2024-07-14 09:44:11.593818] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.330 [2024-07-14 09:44:11.593833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.330 [2024-07-14 09:44:11.597422] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:27.330 [2024-07-14 09:44:11.606722] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.330 [2024-07-14 09:44:11.607376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.330 [2024-07-14 09:44:11.607429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.330 [2024-07-14 09:44:11.607447] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.330 [2024-07-14 09:44:11.607685] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.330 [2024-07-14 09:44:11.607938] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.330 [2024-07-14 09:44:11.607962] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.330 [2024-07-14 09:44:11.607977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.330 [2024-07-14 09:44:11.611560] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:27.330 [2024-07-14 09:44:11.620649] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.330 [2024-07-14 09:44:11.621201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.330 [2024-07-14 09:44:11.621231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.330 [2024-07-14 09:44:11.621249] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.330 [2024-07-14 09:44:11.621487] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.330 [2024-07-14 09:44:11.621729] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.330 [2024-07-14 09:44:11.621757] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.330 [2024-07-14 09:44:11.621773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.330 [2024-07-14 09:44:11.625371] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:27.330 [2024-07-14 09:44:11.634684] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.330 [2024-07-14 09:44:11.635147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.330 [2024-07-14 09:44:11.635177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.330 [2024-07-14 09:44:11.635195] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.330 [2024-07-14 09:44:11.635433] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.330 [2024-07-14 09:44:11.635675] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.330 [2024-07-14 09:44:11.635699] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.330 [2024-07-14 09:44:11.635714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.330 [2024-07-14 09:44:11.639303] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:27.330 [2024-07-14 09:44:11.648610] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.330 [2024-07-14 09:44:11.649076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.330 [2024-07-14 09:44:11.649106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.330 [2024-07-14 09:44:11.649123] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.330 [2024-07-14 09:44:11.649361] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.330 [2024-07-14 09:44:11.649604] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.330 [2024-07-14 09:44:11.649627] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.330 [2024-07-14 09:44:11.649642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.330 [2024-07-14 09:44:11.653230] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:27.330 [2024-07-14 09:44:11.662537] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.330 [2024-07-14 09:44:11.663007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.330 [2024-07-14 09:44:11.663038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.330 [2024-07-14 09:44:11.663056] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.330 [2024-07-14 09:44:11.663294] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.330 [2024-07-14 09:44:11.663536] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.330 [2024-07-14 09:44:11.663559] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.330 [2024-07-14 09:44:11.663574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.330 [2024-07-14 09:44:11.667167] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:27.330 [2024-07-14 09:44:11.676551] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.330 [2024-07-14 09:44:11.677044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.330 [2024-07-14 09:44:11.677075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.330 [2024-07-14 09:44:11.677092] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.330 [2024-07-14 09:44:11.677331] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.330 [2024-07-14 09:44:11.677573] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.330 [2024-07-14 09:44:11.677595] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.330 [2024-07-14 09:44:11.677610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.330 [2024-07-14 09:44:11.681200] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:27.330 [2024-07-14 09:44:11.690497] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.330 [2024-07-14 09:44:11.690947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.330 [2024-07-14 09:44:11.690977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.330 [2024-07-14 09:44:11.690995] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.330 [2024-07-14 09:44:11.691234] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.330 [2024-07-14 09:44:11.691476] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.330 [2024-07-14 09:44:11.691499] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.330 [2024-07-14 09:44:11.691514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.330 [2024-07-14 09:44:11.695100] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:27.330 [2024-07-14 09:44:11.704392] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.330 [2024-07-14 09:44:11.704860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.330 [2024-07-14 09:44:11.704898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.330 [2024-07-14 09:44:11.704916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.330 [2024-07-14 09:44:11.705154] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.330 [2024-07-14 09:44:11.705396] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.330 [2024-07-14 09:44:11.705419] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.330 [2024-07-14 09:44:11.705434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.330 [2024-07-14 09:44:11.709022] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:27.330 [2024-07-14 09:44:11.718323] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.330 [2024-07-14 09:44:11.718806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.330 [2024-07-14 09:44:11.718836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.330 [2024-07-14 09:44:11.718853] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.330 [2024-07-14 09:44:11.719107] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.330 [2024-07-14 09:44:11.719350] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.330 [2024-07-14 09:44:11.719372] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.330 [2024-07-14 09:44:11.719387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.330 [2024-07-14 09:44:11.722977] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:27.330 [2024-07-14 09:44:11.732275] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.330 [2024-07-14 09:44:11.732743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.330 [2024-07-14 09:44:11.732773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.330 [2024-07-14 09:44:11.732791] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.330 [2024-07-14 09:44:11.733041] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.330 [2024-07-14 09:44:11.733283] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.330 [2024-07-14 09:44:11.733306] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.330 [2024-07-14 09:44:11.733321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.330 [2024-07-14 09:44:11.736908] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:27.330 [2024-07-14 09:44:11.746210] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.330 [2024-07-14 09:44:11.746702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.330 [2024-07-14 09:44:11.746733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.330 [2024-07-14 09:44:11.746750] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.330 [2024-07-14 09:44:11.747000] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.330 [2024-07-14 09:44:11.747243] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.330 [2024-07-14 09:44:11.747266] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.330 [2024-07-14 09:44:11.747281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.330 [2024-07-14 09:44:11.750858] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:27.330 [2024-07-14 09:44:11.760160] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.330 [2024-07-14 09:44:11.760609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.330 [2024-07-14 09:44:11.760639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.330 [2024-07-14 09:44:11.760657] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.330 [2024-07-14 09:44:11.760907] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.330 [2024-07-14 09:44:11.761150] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.330 [2024-07-14 09:44:11.761173] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.330 [2024-07-14 09:44:11.761193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.330 [2024-07-14 09:44:11.764774] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:27.330 [2024-07-14 09:44:11.774075] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.330 [2024-07-14 09:44:11.774568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.330 [2024-07-14 09:44:11.774598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.330 [2024-07-14 09:44:11.774615] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.330 [2024-07-14 09:44:11.774854] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.330 [2024-07-14 09:44:11.775107] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.330 [2024-07-14 09:44:11.775130] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.330 [2024-07-14 09:44:11.775145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.330 [2024-07-14 09:44:11.778789] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:27.589 [2024-07-14 09:44:11.788148] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.589 [2024-07-14 09:44:11.788779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.589 [2024-07-14 09:44:11.788842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.589 [2024-07-14 09:44:11.788859] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.589 [2024-07-14 09:44:11.789110] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.589 [2024-07-14 09:44:11.789352] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.589 [2024-07-14 09:44:11.789375] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.589 [2024-07-14 09:44:11.789389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.589 [2024-07-14 09:44:11.792976] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:27.589 [2024-07-14 09:44:11.802063] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.589 [2024-07-14 09:44:11.802510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.589 [2024-07-14 09:44:11.802540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.589 [2024-07-14 09:44:11.802558] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.589 [2024-07-14 09:44:11.802796] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.589 [2024-07-14 09:44:11.803051] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.589 [2024-07-14 09:44:11.803075] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.589 [2024-07-14 09:44:11.803090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.589 [2024-07-14 09:44:11.806675] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:27.589 [2024-07-14 09:44:11.815984] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.589 [2024-07-14 09:44:11.816546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.589 [2024-07-14 09:44:11.816600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.589 [2024-07-14 09:44:11.816617] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.589 [2024-07-14 09:44:11.816855] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.589 [2024-07-14 09:44:11.817108] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.589 [2024-07-14 09:44:11.817132] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.589 [2024-07-14 09:44:11.817146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.589 [2024-07-14 09:44:11.820723] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:27.589 [2024-07-14 09:44:11.830024] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.589 [2024-07-14 09:44:11.830489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.589 [2024-07-14 09:44:11.830519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.589 [2024-07-14 09:44:11.830537] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.590 [2024-07-14 09:44:11.830776] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.590 [2024-07-14 09:44:11.831029] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.590 [2024-07-14 09:44:11.831053] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.590 [2024-07-14 09:44:11.831068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.590 [2024-07-14 09:44:11.834647] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:27.590 [2024-07-14 09:44:11.843949] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.590 [2024-07-14 09:44:11.844409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.590 [2024-07-14 09:44:11.844439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.590 [2024-07-14 09:44:11.844456] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.590 [2024-07-14 09:44:11.844695] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.590 [2024-07-14 09:44:11.844948] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.590 [2024-07-14 09:44:11.844971] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.590 [2024-07-14 09:44:11.844986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.590 [2024-07-14 09:44:11.848562] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:27.590 [2024-07-14 09:44:11.857864] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.590 [2024-07-14 09:44:11.858340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.590 [2024-07-14 09:44:11.858371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.590 [2024-07-14 09:44:11.858388] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.590 [2024-07-14 09:44:11.858627] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.590 [2024-07-14 09:44:11.858886] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.590 [2024-07-14 09:44:11.858911] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.590 [2024-07-14 09:44:11.858926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.590 [2024-07-14 09:44:11.862504] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:27.590 [2024-07-14 09:44:11.871806] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.590 [2024-07-14 09:44:11.872284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.590 [2024-07-14 09:44:11.872314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.590 [2024-07-14 09:44:11.872332] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.590 [2024-07-14 09:44:11.872570] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.590 [2024-07-14 09:44:11.872812] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.590 [2024-07-14 09:44:11.872835] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.590 [2024-07-14 09:44:11.872849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.590 [2024-07-14 09:44:11.876437] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:27.590 [2024-07-14 09:44:11.885746] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.590 [2024-07-14 09:44:11.886200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.590 [2024-07-14 09:44:11.886232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.590 [2024-07-14 09:44:11.886249] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.590 [2024-07-14 09:44:11.886487] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.590 [2024-07-14 09:44:11.886729] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.590 [2024-07-14 09:44:11.886752] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.590 [2024-07-14 09:44:11.886767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.590 [2024-07-14 09:44:11.890353] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:27.590 [2024-07-14 09:44:11.899659] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.590 [2024-07-14 09:44:11.900137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.590 [2024-07-14 09:44:11.900168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.590 [2024-07-14 09:44:11.900193] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.590 [2024-07-14 09:44:11.900431] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.590 [2024-07-14 09:44:11.900674] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.590 [2024-07-14 09:44:11.900697] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.590 [2024-07-14 09:44:11.900712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.590 [2024-07-14 09:44:11.904301] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:27.590 [2024-07-14 09:44:11.913608] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.590 [2024-07-14 09:44:11.914095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.590 [2024-07-14 09:44:11.914125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.590 [2024-07-14 09:44:11.914143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.590 [2024-07-14 09:44:11.914381] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.590 [2024-07-14 09:44:11.914623] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.590 [2024-07-14 09:44:11.914645] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.590 [2024-07-14 09:44:11.914660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.590 [2024-07-14 09:44:11.918244] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:27.590 [2024-07-14 09:44:11.927546] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.590 [2024-07-14 09:44:11.927993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.590 [2024-07-14 09:44:11.928025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.590 [2024-07-14 09:44:11.928042] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.590 [2024-07-14 09:44:11.928281] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.590 [2024-07-14 09:44:11.928523] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.590 [2024-07-14 09:44:11.928545] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.590 [2024-07-14 09:44:11.928560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.590 [2024-07-14 09:44:11.932149] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:27.590 [2024-07-14 09:44:11.941443] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.590 [2024-07-14 09:44:11.941911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.590 [2024-07-14 09:44:11.941941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.590 [2024-07-14 09:44:11.941959] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.590 [2024-07-14 09:44:11.942198] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.590 [2024-07-14 09:44:11.942440] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.590 [2024-07-14 09:44:11.942462] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.590 [2024-07-14 09:44:11.942477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.590 [2024-07-14 09:44:11.946064] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:27.590 [2024-07-14 09:44:11.955357] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.590 [2024-07-14 09:44:11.955840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.590 [2024-07-14 09:44:11.955877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.590 [2024-07-14 09:44:11.955902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.590 [2024-07-14 09:44:11.956141] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.590 [2024-07-14 09:44:11.956383] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.590 [2024-07-14 09:44:11.956406] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.590 [2024-07-14 09:44:11.956421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.590 [2024-07-14 09:44:11.960007] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:27.590 [2024-07-14 09:44:11.969306] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.590 [2024-07-14 09:44:11.969750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.590 [2024-07-14 09:44:11.969780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.590 [2024-07-14 09:44:11.969798] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.590 [2024-07-14 09:44:11.970046] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.590 [2024-07-14 09:44:11.970288] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.590 [2024-07-14 09:44:11.970311] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.590 [2024-07-14 09:44:11.970326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.590 [2024-07-14 09:44:11.973910] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:27.590 [2024-07-14 09:44:11.983209] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.590 [2024-07-14 09:44:11.983677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.590 [2024-07-14 09:44:11.983707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.591 [2024-07-14 09:44:11.983724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.591 [2024-07-14 09:44:11.983973] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.591 [2024-07-14 09:44:11.984215] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.591 [2024-07-14 09:44:11.984238] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.591 [2024-07-14 09:44:11.984253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.591 [2024-07-14 09:44:11.987825] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:27.591 [2024-07-14 09:44:11.997122] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.591 [2024-07-14 09:44:11.997599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.591 [2024-07-14 09:44:11.997630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.591 [2024-07-14 09:44:11.997647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.591 [2024-07-14 09:44:11.997895] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.591 [2024-07-14 09:44:11.998138] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.591 [2024-07-14 09:44:11.998165] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.591 [2024-07-14 09:44:11.998181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.591 [2024-07-14 09:44:12.001761] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:27.591 [2024-07-14 09:44:12.011073] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.591 [2024-07-14 09:44:12.011542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.591 [2024-07-14 09:44:12.011573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.591 [2024-07-14 09:44:12.011590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.591 [2024-07-14 09:44:12.011828] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.591 [2024-07-14 09:44:12.012080] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.591 [2024-07-14 09:44:12.012104] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.591 [2024-07-14 09:44:12.012119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.591 [2024-07-14 09:44:12.015694] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:27.591 [2024-07-14 09:44:12.024999] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.591 [2024-07-14 09:44:12.025470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.591 [2024-07-14 09:44:12.025500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.591 [2024-07-14 09:44:12.025518] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.591 [2024-07-14 09:44:12.025757] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.591 [2024-07-14 09:44:12.026009] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.591 [2024-07-14 09:44:12.026033] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.591 [2024-07-14 09:44:12.026048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.591 [2024-07-14 09:44:12.029624] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:27.591 [2024-07-14 09:44:12.039053] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.591 [2024-07-14 09:44:12.039500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.591 [2024-07-14 09:44:12.039531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.591 [2024-07-14 09:44:12.039549] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.591 [2024-07-14 09:44:12.039804] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.591 [2024-07-14 09:44:12.040058] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.591 [2024-07-14 09:44:12.040082] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.591 [2024-07-14 09:44:12.040097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.850 [2024-07-14 09:44:12.043770] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:27.850 [2024-07-14 09:44:12.052981] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.850 [2024-07-14 09:44:12.053454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.850 [2024-07-14 09:44:12.053485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.850 [2024-07-14 09:44:12.053503] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.850 [2024-07-14 09:44:12.053742] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.850 [2024-07-14 09:44:12.053994] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.850 [2024-07-14 09:44:12.054017] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.850 [2024-07-14 09:44:12.054032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.850 [2024-07-14 09:44:12.057611] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:27.850 [2024-07-14 09:44:12.066916] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.850 [2024-07-14 09:44:12.067365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.850 [2024-07-14 09:44:12.067395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.850 [2024-07-14 09:44:12.067413] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.850 [2024-07-14 09:44:12.067651] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.850 [2024-07-14 09:44:12.067906] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.850 [2024-07-14 09:44:12.067929] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.850 [2024-07-14 09:44:12.067944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.850 [2024-07-14 09:44:12.071522] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:27.850 [2024-07-14 09:44:12.080818] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.850 [2024-07-14 09:44:12.081269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.850 [2024-07-14 09:44:12.081299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.850 [2024-07-14 09:44:12.081316] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.850 [2024-07-14 09:44:12.081555] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.850 [2024-07-14 09:44:12.081796] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.850 [2024-07-14 09:44:12.081819] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.850 [2024-07-14 09:44:12.081833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.850 [2024-07-14 09:44:12.085421] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:27.850 [2024-07-14 09:44:12.094710] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.850 [2024-07-14 09:44:12.095180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.850 [2024-07-14 09:44:12.095210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.850 [2024-07-14 09:44:12.095227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.850 [2024-07-14 09:44:12.095471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.850 [2024-07-14 09:44:12.095713] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.850 [2024-07-14 09:44:12.095736] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.850 [2024-07-14 09:44:12.095750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.850 [2024-07-14 09:44:12.099338] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:27.850 [2024-07-14 09:44:12.108628] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.850 [2024-07-14 09:44:12.109085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.850 [2024-07-14 09:44:12.109116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.850 [2024-07-14 09:44:12.109133] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.850 [2024-07-14 09:44:12.109373] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.850 [2024-07-14 09:44:12.109614] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.850 [2024-07-14 09:44:12.109636] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.850 [2024-07-14 09:44:12.109651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.850 [2024-07-14 09:44:12.113245] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:27.850 [2024-07-14 09:44:12.122539] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.850 [2024-07-14 09:44:12.122991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.850 [2024-07-14 09:44:12.123022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.850 [2024-07-14 09:44:12.123040] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.850 [2024-07-14 09:44:12.123279] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.850 [2024-07-14 09:44:12.123521] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.850 [2024-07-14 09:44:12.123544] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.850 [2024-07-14 09:44:12.123558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.850 [2024-07-14 09:44:12.127147] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:27.850 [2024-07-14 09:44:12.136445] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.850 [2024-07-14 09:44:12.136898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.850 [2024-07-14 09:44:12.136929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.850 [2024-07-14 09:44:12.136946] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.850 [2024-07-14 09:44:12.137185] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.850 [2024-07-14 09:44:12.137426] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.850 [2024-07-14 09:44:12.137449] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.850 [2024-07-14 09:44:12.137473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.850 [2024-07-14 09:44:12.141063] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:27.850 [2024-07-14 09:44:12.150354] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.850 [2024-07-14 09:44:12.150795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.850 [2024-07-14 09:44:12.150826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.850 [2024-07-14 09:44:12.150843] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.851 [2024-07-14 09:44:12.151092] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.851 [2024-07-14 09:44:12.151335] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.851 [2024-07-14 09:44:12.151358] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.851 [2024-07-14 09:44:12.151373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.851 [2024-07-14 09:44:12.154958] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:27.851 [2024-07-14 09:44:12.164252] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.851 [2024-07-14 09:44:12.164730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.851 [2024-07-14 09:44:12.164761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.851 [2024-07-14 09:44:12.164779] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.851 [2024-07-14 09:44:12.165030] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.851 [2024-07-14 09:44:12.165272] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.851 [2024-07-14 09:44:12.165295] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.851 [2024-07-14 09:44:12.165310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.851 [2024-07-14 09:44:12.168896] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:27.851 [2024-07-14 09:44:12.178191] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.851 [2024-07-14 09:44:12.178630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.851 [2024-07-14 09:44:12.178661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.851 [2024-07-14 09:44:12.178678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.851 [2024-07-14 09:44:12.178928] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.851 [2024-07-14 09:44:12.179170] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.851 [2024-07-14 09:44:12.179193] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.851 [2024-07-14 09:44:12.179207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.851 [2024-07-14 09:44:12.182785] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:27.851 [2024-07-14 09:44:12.192083] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.851 [2024-07-14 09:44:12.192517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.851 [2024-07-14 09:44:12.192548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.851 [2024-07-14 09:44:12.192566] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.851 [2024-07-14 09:44:12.192806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.851 [2024-07-14 09:44:12.193059] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.851 [2024-07-14 09:44:12.193082] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.851 [2024-07-14 09:44:12.193097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.851 [2024-07-14 09:44:12.196675] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:27.851 [2024-07-14 09:44:12.205979] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.851 [2024-07-14 09:44:12.206450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.851 [2024-07-14 09:44:12.206480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.851 [2024-07-14 09:44:12.206498] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.851 [2024-07-14 09:44:12.206736] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.851 [2024-07-14 09:44:12.206989] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.851 [2024-07-14 09:44:12.207012] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.851 [2024-07-14 09:44:12.207028] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.851 [2024-07-14 09:44:12.210609] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:27.851 [2024-07-14 09:44:12.219908] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.851 [2024-07-14 09:44:12.220385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.851 [2024-07-14 09:44:12.220416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.851 [2024-07-14 09:44:12.220434] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.851 [2024-07-14 09:44:12.220673] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.851 [2024-07-14 09:44:12.220929] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.851 [2024-07-14 09:44:12.220961] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.851 [2024-07-14 09:44:12.220976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.851 [2024-07-14 09:44:12.224555] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:27.851 [2024-07-14 09:44:12.233857] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.851 [2024-07-14 09:44:12.234344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.851 [2024-07-14 09:44:12.234374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.851 [2024-07-14 09:44:12.234392] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.851 [2024-07-14 09:44:12.234630] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.851 [2024-07-14 09:44:12.234887] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.851 [2024-07-14 09:44:12.234911] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.851 [2024-07-14 09:44:12.234926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.851 [2024-07-14 09:44:12.238504] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:27.851 [2024-07-14 09:44:12.247803] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.851 [2024-07-14 09:44:12.248233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.851 [2024-07-14 09:44:12.248264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.851 [2024-07-14 09:44:12.248282] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.851 [2024-07-14 09:44:12.248520] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.851 [2024-07-14 09:44:12.248763] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.851 [2024-07-14 09:44:12.248785] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.851 [2024-07-14 09:44:12.248800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.851 [2024-07-14 09:44:12.252384] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:27.851 [2024-07-14 09:44:12.261679] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.851 [2024-07-14 09:44:12.262182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.851 [2024-07-14 09:44:12.262212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.851 [2024-07-14 09:44:12.262230] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.851 [2024-07-14 09:44:12.262468] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.851 [2024-07-14 09:44:12.262711] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.851 [2024-07-14 09:44:12.262733] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.851 [2024-07-14 09:44:12.262748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.851 [2024-07-14 09:44:12.266330] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:27.851 [2024-07-14 09:44:12.275624] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.851 [2024-07-14 09:44:12.276100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.851 [2024-07-14 09:44:12.276130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.851 [2024-07-14 09:44:12.276147] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.851 [2024-07-14 09:44:12.276385] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.852 [2024-07-14 09:44:12.276627] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.852 [2024-07-14 09:44:12.276650] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.852 [2024-07-14 09:44:12.276665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.852 [2024-07-14 09:44:12.280257] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:27.852 [2024-07-14 09:44:12.289577] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:27.852 [2024-07-14 09:44:12.290055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.852 [2024-07-14 09:44:12.290086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:27.852 [2024-07-14 09:44:12.290104] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:27.852 [2024-07-14 09:44:12.290344] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:27.852 [2024-07-14 09:44:12.290585] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:27.852 [2024-07-14 09:44:12.290608] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:27.852 [2024-07-14 09:44:12.290623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:27.852 [2024-07-14 09:44:12.294210] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:28.111 [2024-07-14 09:44:12.303538] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.111 [2024-07-14 09:44:12.304010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.111 [2024-07-14 09:44:12.304042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.111 [2024-07-14 09:44:12.304060] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.111 [2024-07-14 09:44:12.304300] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.111 [2024-07-14 09:44:12.304541] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.111 [2024-07-14 09:44:12.304564] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.111 [2024-07-14 09:44:12.304579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.111 [2024-07-14 09:44:12.308290] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:28.111 [2024-07-14 09:44:12.317590] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.111 [2024-07-14 09:44:12.318076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.111 [2024-07-14 09:44:12.318108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.111 [2024-07-14 09:44:12.318126] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.111 [2024-07-14 09:44:12.318365] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.111 [2024-07-14 09:44:12.318606] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.111 [2024-07-14 09:44:12.318630] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.111 [2024-07-14 09:44:12.318645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.111 [2024-07-14 09:44:12.322230] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:28.111 [2024-07-14 09:44:12.331526] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.111 [2024-07-14 09:44:12.331992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.111 [2024-07-14 09:44:12.332023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.111 [2024-07-14 09:44:12.332046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.111 [2024-07-14 09:44:12.332287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.111 [2024-07-14 09:44:12.332528] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.111 [2024-07-14 09:44:12.332551] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.111 [2024-07-14 09:44:12.332566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.111 [2024-07-14 09:44:12.336155] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:28.111 [2024-07-14 09:44:12.345451] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.111 [2024-07-14 09:44:12.345927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.111 [2024-07-14 09:44:12.345958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.111 [2024-07-14 09:44:12.345975] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.111 [2024-07-14 09:44:12.346214] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.111 [2024-07-14 09:44:12.346455] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.111 [2024-07-14 09:44:12.346478] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.111 [2024-07-14 09:44:12.346493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.111 [2024-07-14 09:44:12.350084] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:28.111 [2024-07-14 09:44:12.359401] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.112 [2024-07-14 09:44:12.359827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.112 [2024-07-14 09:44:12.359858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.112 [2024-07-14 09:44:12.359887] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.112 [2024-07-14 09:44:12.360128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.112 [2024-07-14 09:44:12.360370] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.112 [2024-07-14 09:44:12.360393] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.112 [2024-07-14 09:44:12.360408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.112 [2024-07-14 09:44:12.363990] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:28.112 [2024-07-14 09:44:12.373309] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.112 [2024-07-14 09:44:12.373780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.112 [2024-07-14 09:44:12.373811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.112 [2024-07-14 09:44:12.373829] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.112 [2024-07-14 09:44:12.374075] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.112 [2024-07-14 09:44:12.374317] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.112 [2024-07-14 09:44:12.374345] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.112 [2024-07-14 09:44:12.374361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.112 [2024-07-14 09:44:12.377948] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:28.112 [2024-07-14 09:44:12.387272] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.112 [2024-07-14 09:44:12.387716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.112 [2024-07-14 09:44:12.387746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.112 [2024-07-14 09:44:12.387764] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.112 [2024-07-14 09:44:12.388012] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.112 [2024-07-14 09:44:12.388255] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.112 [2024-07-14 09:44:12.388277] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.112 [2024-07-14 09:44:12.388292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.112 [2024-07-14 09:44:12.391875] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:28.112 [2024-07-14 09:44:12.401172] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.112 [2024-07-14 09:44:12.401613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.112 [2024-07-14 09:44:12.401644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.112 [2024-07-14 09:44:12.401661] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.112 [2024-07-14 09:44:12.401909] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.112 [2024-07-14 09:44:12.402152] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.112 [2024-07-14 09:44:12.402175] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.112 [2024-07-14 09:44:12.402190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.112 [2024-07-14 09:44:12.406010] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:28.112 [2024-07-14 09:44:12.415108] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.112 [2024-07-14 09:44:12.415559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.112 [2024-07-14 09:44:12.415590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.112 [2024-07-14 09:44:12.415608] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.112 [2024-07-14 09:44:12.415847] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.112 [2024-07-14 09:44:12.416099] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.112 [2024-07-14 09:44:12.416123] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.112 [2024-07-14 09:44:12.416138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.112 [2024-07-14 09:44:12.419719] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:28.112 [2024-07-14 09:44:12.429021] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.112 [2024-07-14 09:44:12.429500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.112 [2024-07-14 09:44:12.429531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.112 [2024-07-14 09:44:12.429549] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.112 [2024-07-14 09:44:12.429787] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.112 [2024-07-14 09:44:12.430039] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.112 [2024-07-14 09:44:12.430063] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.112 [2024-07-14 09:44:12.430078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.112 [2024-07-14 09:44:12.433656] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:28.112 [2024-07-14 09:44:12.442958] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.112 [2024-07-14 09:44:12.443414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.112 [2024-07-14 09:44:12.443446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.112 [2024-07-14 09:44:12.443464] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.112 [2024-07-14 09:44:12.443703] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.112 [2024-07-14 09:44:12.443955] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.112 [2024-07-14 09:44:12.443979] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.112 [2024-07-14 09:44:12.443995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.112 [2024-07-14 09:44:12.447573] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:28.112 [2024-07-14 09:44:12.456871] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.112 [2024-07-14 09:44:12.457315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.112 [2024-07-14 09:44:12.457346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.112 [2024-07-14 09:44:12.457363] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.112 [2024-07-14 09:44:12.457602] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.112 [2024-07-14 09:44:12.457843] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.112 [2024-07-14 09:44:12.457875] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.112 [2024-07-14 09:44:12.457894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.112 [2024-07-14 09:44:12.461472] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:28.112 [2024-07-14 09:44:12.470767] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.113 [2024-07-14 09:44:12.471257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.113 [2024-07-14 09:44:12.471287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.113 [2024-07-14 09:44:12.471305] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.113 [2024-07-14 09:44:12.471550] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.113 [2024-07-14 09:44:12.471792] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.113 [2024-07-14 09:44:12.471815] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.113 [2024-07-14 09:44:12.471830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.113 [2024-07-14 09:44:12.475431] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:28.113 [2024-07-14 09:44:12.484731] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.113 [2024-07-14 09:44:12.485199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.113 [2024-07-14 09:44:12.485231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.113 [2024-07-14 09:44:12.485248] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.113 [2024-07-14 09:44:12.485488] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.113 [2024-07-14 09:44:12.485730] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.113 [2024-07-14 09:44:12.485752] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.113 [2024-07-14 09:44:12.485767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.113 [2024-07-14 09:44:12.489356] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:28.113 [2024-07-14 09:44:12.498663] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.113 [2024-07-14 09:44:12.499122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.113 [2024-07-14 09:44:12.499162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.113 [2024-07-14 09:44:12.499179] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.113 [2024-07-14 09:44:12.499418] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.113 [2024-07-14 09:44:12.499660] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.113 [2024-07-14 09:44:12.499683] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.113 [2024-07-14 09:44:12.499698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.113 [2024-07-14 09:44:12.503290] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:28.113 [2024-07-14 09:44:12.512637] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.113 [2024-07-14 09:44:12.513105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.113 [2024-07-14 09:44:12.513136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.113 [2024-07-14 09:44:12.513154] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.113 [2024-07-14 09:44:12.513393] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.113 [2024-07-14 09:44:12.513634] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.113 [2024-07-14 09:44:12.513657] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.113 [2024-07-14 09:44:12.513678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.113 [2024-07-14 09:44:12.517272] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:28.113 [2024-07-14 09:44:12.526572] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.113 [2024-07-14 09:44:12.527042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.113 [2024-07-14 09:44:12.527073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.113 [2024-07-14 09:44:12.527090] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.113 [2024-07-14 09:44:12.527329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.113 [2024-07-14 09:44:12.527572] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.113 [2024-07-14 09:44:12.527594] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.113 [2024-07-14 09:44:12.527609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.113 [2024-07-14 09:44:12.531202] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:28.113 [2024-07-14 09:44:12.540503] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.113 [2024-07-14 09:44:12.540976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.113 [2024-07-14 09:44:12.541007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.113 [2024-07-14 09:44:12.541024] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.113 [2024-07-14 09:44:12.541263] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.113 [2024-07-14 09:44:12.541505] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.113 [2024-07-14 09:44:12.541529] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.113 [2024-07-14 09:44:12.541544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.113 [2024-07-14 09:44:12.545132] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:28.113 [2024-07-14 09:44:12.554440] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.113 [2024-07-14 09:44:12.554893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.113 [2024-07-14 09:44:12.554929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.113 [2024-07-14 09:44:12.554946] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.113 [2024-07-14 09:44:12.555186] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.113 [2024-07-14 09:44:12.555428] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.113 [2024-07-14 09:44:12.555451] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.113 [2024-07-14 09:44:12.555465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.113 [2024-07-14 09:44:12.559116] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:28.373 [2024-07-14 09:44:12.568492] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.373 [2024-07-14 09:44:12.568989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.373 [2024-07-14 09:44:12.569021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.373 [2024-07-14 09:44:12.569039] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.373 [2024-07-14 09:44:12.569278] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.373 [2024-07-14 09:44:12.569520] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.373 [2024-07-14 09:44:12.569544] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.373 [2024-07-14 09:44:12.569559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.373 [2024-07-14 09:44:12.573171] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:28.373 [2024-07-14 09:44:12.582476] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.373 [2024-07-14 09:44:12.582949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.373 [2024-07-14 09:44:12.582980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.373 [2024-07-14 09:44:12.582998] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.373 [2024-07-14 09:44:12.583237] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.373 [2024-07-14 09:44:12.583480] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.373 [2024-07-14 09:44:12.583503] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.373 [2024-07-14 09:44:12.583518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.373 [2024-07-14 09:44:12.587112] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:28.373 [2024-07-14 09:44:12.596419] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.373 [2024-07-14 09:44:12.596884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.373 [2024-07-14 09:44:12.596915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.373 [2024-07-14 09:44:12.596933] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.373 [2024-07-14 09:44:12.597173] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.373 [2024-07-14 09:44:12.597415] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.373 [2024-07-14 09:44:12.597438] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.373 [2024-07-14 09:44:12.597453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.373 [2024-07-14 09:44:12.601045] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:28.373 [2024-07-14 09:44:12.610354] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.373 [2024-07-14 09:44:12.610832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.373 [2024-07-14 09:44:12.610863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.373 [2024-07-14 09:44:12.610891] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.373 [2024-07-14 09:44:12.611136] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.373 [2024-07-14 09:44:12.611378] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.373 [2024-07-14 09:44:12.611401] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.373 [2024-07-14 09:44:12.611416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.373 [2024-07-14 09:44:12.615006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:28.373 [2024-07-14 09:44:12.624310] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.373 [2024-07-14 09:44:12.624781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.373 [2024-07-14 09:44:12.624812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.373 [2024-07-14 09:44:12.624829] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.373 [2024-07-14 09:44:12.625076] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.373 [2024-07-14 09:44:12.625319] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.374 [2024-07-14 09:44:12.625342] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.374 [2024-07-14 09:44:12.625356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.374 [2024-07-14 09:44:12.628940] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:28.374 [2024-07-14 09:44:12.638259] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.374 [2024-07-14 09:44:12.638700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.374 [2024-07-14 09:44:12.638730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.374 [2024-07-14 09:44:12.638748] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.374 [2024-07-14 09:44:12.638996] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.374 [2024-07-14 09:44:12.639239] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.374 [2024-07-14 09:44:12.639262] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.374 [2024-07-14 09:44:12.639277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.374 [2024-07-14 09:44:12.642855] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:28.374 [2024-07-14 09:44:12.652163] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.374 [2024-07-14 09:44:12.652603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.374 [2024-07-14 09:44:12.652633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.374 [2024-07-14 09:44:12.652651] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.374 [2024-07-14 09:44:12.652898] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.374 [2024-07-14 09:44:12.653149] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.374 [2024-07-14 09:44:12.653171] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.374 [2024-07-14 09:44:12.653186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.374 [2024-07-14 09:44:12.656771] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:28.374 [2024-07-14 09:44:12.666079] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.374 [2024-07-14 09:44:12.666523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.374 [2024-07-14 09:44:12.666553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.374 [2024-07-14 09:44:12.666570] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.374 [2024-07-14 09:44:12.666809] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.374 [2024-07-14 09:44:12.667068] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.374 [2024-07-14 09:44:12.667091] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.374 [2024-07-14 09:44:12.667106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.374 [2024-07-14 09:44:12.670681] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:28.374 [2024-07-14 09:44:12.679990] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.374 [2024-07-14 09:44:12.680478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.374 [2024-07-14 09:44:12.680508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.374 [2024-07-14 09:44:12.680526] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.374 [2024-07-14 09:44:12.680765] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.374 [2024-07-14 09:44:12.681017] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.374 [2024-07-14 09:44:12.681040] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.374 [2024-07-14 09:44:12.681055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.374 [2024-07-14 09:44:12.684629] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:28.374 [2024-07-14 09:44:12.693931] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.374 [2024-07-14 09:44:12.694382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.374 [2024-07-14 09:44:12.694412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.374 [2024-07-14 09:44:12.694430] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.374 [2024-07-14 09:44:12.694669] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.374 [2024-07-14 09:44:12.694922] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.374 [2024-07-14 09:44:12.694946] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.374 [2024-07-14 09:44:12.694961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.374 [2024-07-14 09:44:12.698538] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:28.374 [2024-07-14 09:44:12.707974] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.374 [2024-07-14 09:44:12.708450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.374 [2024-07-14 09:44:12.708481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.374 [2024-07-14 09:44:12.708504] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.374 [2024-07-14 09:44:12.708744] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.374 [2024-07-14 09:44:12.708997] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.374 [2024-07-14 09:44:12.709021] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.374 [2024-07-14 09:44:12.709036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.374 [2024-07-14 09:44:12.712617] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:28.374 [2024-07-14 09:44:12.721916] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.374 [2024-07-14 09:44:12.722389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.374 [2024-07-14 09:44:12.722419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.374 [2024-07-14 09:44:12.722437] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.374 [2024-07-14 09:44:12.722676] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.374 [2024-07-14 09:44:12.722930] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.374 [2024-07-14 09:44:12.722955] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.374 [2024-07-14 09:44:12.722969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.374 [2024-07-14 09:44:12.726545] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:28.374 [2024-07-14 09:44:12.735838] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.374 [2024-07-14 09:44:12.736316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.374 [2024-07-14 09:44:12.736347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.374 [2024-07-14 09:44:12.736364] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.374 [2024-07-14 09:44:12.736603] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.374 [2024-07-14 09:44:12.736846] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.374 [2024-07-14 09:44:12.736877] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.375 [2024-07-14 09:44:12.736895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.375 [2024-07-14 09:44:12.740475] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:28.375 [2024-07-14 09:44:12.749768] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.375 [2024-07-14 09:44:12.750254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.375 [2024-07-14 09:44:12.750285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.375 [2024-07-14 09:44:12.750303] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.375 [2024-07-14 09:44:12.750541] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.375 [2024-07-14 09:44:12.750789] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.375 [2024-07-14 09:44:12.750812] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.375 [2024-07-14 09:44:12.750827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.375 [2024-07-14 09:44:12.754415] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:28.375 [2024-07-14 09:44:12.763712] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.375 [2024-07-14 09:44:12.764141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.375 [2024-07-14 09:44:12.764171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.375 [2024-07-14 09:44:12.764188] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.375 [2024-07-14 09:44:12.764427] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.375 [2024-07-14 09:44:12.764669] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.375 [2024-07-14 09:44:12.764691] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.375 [2024-07-14 09:44:12.764706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.375 [2024-07-14 09:44:12.768292] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:28.375 [2024-07-14 09:44:12.777584] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.375 [2024-07-14 09:44:12.778038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.375 [2024-07-14 09:44:12.778069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.375 [2024-07-14 09:44:12.778086] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.375 [2024-07-14 09:44:12.778325] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.375 [2024-07-14 09:44:12.778567] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.375 [2024-07-14 09:44:12.778589] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.375 [2024-07-14 09:44:12.778604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.375 [2024-07-14 09:44:12.782193] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:28.375 [2024-07-14 09:44:12.791501] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.375 [2024-07-14 09:44:12.791952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.375 [2024-07-14 09:44:12.791983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.375 [2024-07-14 09:44:12.792000] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.375 [2024-07-14 09:44:12.792239] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.375 [2024-07-14 09:44:12.792481] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.375 [2024-07-14 09:44:12.792503] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.375 [2024-07-14 09:44:12.792518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.375 [2024-07-14 09:44:12.796104] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:28.375 [2024-07-14 09:44:12.805405] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.375 [2024-07-14 09:44:12.805875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.375 [2024-07-14 09:44:12.805906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.375 [2024-07-14 09:44:12.805924] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.375 [2024-07-14 09:44:12.806162] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.375 [2024-07-14 09:44:12.806404] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.375 [2024-07-14 09:44:12.806427] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.375 [2024-07-14 09:44:12.806442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.375 [2024-07-14 09:44:12.810031] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:28.375 [2024-07-14 09:44:12.819321] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.375 [2024-07-14 09:44:12.819789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.375 [2024-07-14 09:44:12.819818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.375 [2024-07-14 09:44:12.819836] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.375 [2024-07-14 09:44:12.820120] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.375 [2024-07-14 09:44:12.820378] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.375 [2024-07-14 09:44:12.820403] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.375 [2024-07-14 09:44:12.820418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.375 [2024-07-14 09:44:12.824119] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:28.641 [2024-07-14 09:44:12.833378] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.641 [2024-07-14 09:44:12.833853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.641 [2024-07-14 09:44:12.833891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.641 [2024-07-14 09:44:12.833909] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.641 [2024-07-14 09:44:12.834148] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.641 [2024-07-14 09:44:12.834390] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.641 [2024-07-14 09:44:12.834413] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.641 [2024-07-14 09:44:12.834428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.641 [2024-07-14 09:44:12.838019] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:28.641 [2024-07-14 09:44:12.847315] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.641 [2024-07-14 09:44:12.847785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.641 [2024-07-14 09:44:12.847816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.641 [2024-07-14 09:44:12.847842] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.641 [2024-07-14 09:44:12.848092] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.641 [2024-07-14 09:44:12.848335] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.641 [2024-07-14 09:44:12.848358] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.641 [2024-07-14 09:44:12.848372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.641 [2024-07-14 09:44:12.851956] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:28.641 [2024-07-14 09:44:12.861252] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.641 [2024-07-14 09:44:12.861725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.641 [2024-07-14 09:44:12.861756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.641 [2024-07-14 09:44:12.861773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.641 [2024-07-14 09:44:12.862023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.641 [2024-07-14 09:44:12.862265] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.641 [2024-07-14 09:44:12.862288] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.641 [2024-07-14 09:44:12.862303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.641 [2024-07-14 09:44:12.865887] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:28.641 [2024-07-14 09:44:12.875183] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.641 [2024-07-14 09:44:12.875633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.641 [2024-07-14 09:44:12.875664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.641 [2024-07-14 09:44:12.875682] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.641 [2024-07-14 09:44:12.875932] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.641 [2024-07-14 09:44:12.876174] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.641 [2024-07-14 09:44:12.876197] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.641 [2024-07-14 09:44:12.876212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.641 [2024-07-14 09:44:12.879789] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:28.641 [2024-07-14 09:44:12.889090] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.641 [2024-07-14 09:44:12.889557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.641 [2024-07-14 09:44:12.889587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.641 [2024-07-14 09:44:12.889605] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.641 [2024-07-14 09:44:12.889843] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.641 [2024-07-14 09:44:12.890092] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.641 [2024-07-14 09:44:12.890115] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.641 [2024-07-14 09:44:12.890136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.641 [2024-07-14 09:44:12.893717] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:28.641 [2024-07-14 09:44:12.903018] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.641 [2024-07-14 09:44:12.903484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.641 [2024-07-14 09:44:12.903514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.641 [2024-07-14 09:44:12.903531] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.641 [2024-07-14 09:44:12.903769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.642 [2024-07-14 09:44:12.904022] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.642 [2024-07-14 09:44:12.904046] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.642 [2024-07-14 09:44:12.904061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.642 [2024-07-14 09:44:12.907640] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:28.642 [2024-07-14 09:44:12.916943] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.642 [2024-07-14 09:44:12.917384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.642 [2024-07-14 09:44:12.917414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.642 [2024-07-14 09:44:12.917432] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.642 [2024-07-14 09:44:12.917670] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.642 [2024-07-14 09:44:12.917924] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.642 [2024-07-14 09:44:12.917948] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.642 [2024-07-14 09:44:12.917963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.642 [2024-07-14 09:44:12.921538] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:28.642 [2024-07-14 09:44:12.930831] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.642 [2024-07-14 09:44:12.931313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.642 [2024-07-14 09:44:12.931344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.642 [2024-07-14 09:44:12.931361] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.642 [2024-07-14 09:44:12.931599] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.642 [2024-07-14 09:44:12.931840] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.642 [2024-07-14 09:44:12.931862] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.642 [2024-07-14 09:44:12.931888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.642 [2024-07-14 09:44:12.935465] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:28.642 [2024-07-14 09:44:12.944751] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.642 [2024-07-14 09:44:12.945252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.642 [2024-07-14 09:44:12.945283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.642 [2024-07-14 09:44:12.945300] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.642 [2024-07-14 09:44:12.945539] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.642 [2024-07-14 09:44:12.945780] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.642 [2024-07-14 09:44:12.945803] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.642 [2024-07-14 09:44:12.945818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.642 [2024-07-14 09:44:12.949403] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:28.642 [2024-07-14 09:44:12.958694] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.642 [2024-07-14 09:44:12.959146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.642 [2024-07-14 09:44:12.959176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.642 [2024-07-14 09:44:12.959194] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.642 [2024-07-14 09:44:12.959432] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.642 [2024-07-14 09:44:12.959674] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.642 [2024-07-14 09:44:12.959696] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.642 [2024-07-14 09:44:12.959711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.642 [2024-07-14 09:44:12.963297] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:28.642 [2024-07-14 09:44:12.972588] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.642 [2024-07-14 09:44:12.973049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.642 [2024-07-14 09:44:12.973080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.642 [2024-07-14 09:44:12.973097] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.642 [2024-07-14 09:44:12.973337] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.642 [2024-07-14 09:44:12.973578] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.642 [2024-07-14 09:44:12.973601] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.642 [2024-07-14 09:44:12.973615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.642 [2024-07-14 09:44:12.977203] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:28.642 [2024-07-14 09:44:12.986490] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.642 [2024-07-14 09:44:12.986962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.642 [2024-07-14 09:44:12.986992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.642 [2024-07-14 09:44:12.987010] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.642 [2024-07-14 09:44:12.987254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.642 [2024-07-14 09:44:12.987496] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.642 [2024-07-14 09:44:12.987519] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.642 [2024-07-14 09:44:12.987534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.642 [2024-07-14 09:44:12.991119] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:28.642 [2024-07-14 09:44:13.000427] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.642 [2024-07-14 09:44:13.000906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.642 [2024-07-14 09:44:13.000936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.642 [2024-07-14 09:44:13.000954] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.642 [2024-07-14 09:44:13.001193] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.642 [2024-07-14 09:44:13.001435] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.642 [2024-07-14 09:44:13.001459] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.642 [2024-07-14 09:44:13.001474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.642 [2024-07-14 09:44:13.005061] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:28.642 [2024-07-14 09:44:13.014357] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.642 [2024-07-14 09:44:13.014818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.642 [2024-07-14 09:44:13.014848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.642 [2024-07-14 09:44:13.014874] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.642 [2024-07-14 09:44:13.015116] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.642 [2024-07-14 09:44:13.015358] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.642 [2024-07-14 09:44:13.015380] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.642 [2024-07-14 09:44:13.015396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.642 [2024-07-14 09:44:13.018978] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:28.642 [2024-07-14 09:44:13.028298] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.643 [2024-07-14 09:44:13.028773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.643 [2024-07-14 09:44:13.028804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.643 [2024-07-14 09:44:13.028822] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.643 [2024-07-14 09:44:13.029071] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.643 [2024-07-14 09:44:13.029313] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.643 [2024-07-14 09:44:13.029336] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.643 [2024-07-14 09:44:13.029351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.643 [2024-07-14 09:44:13.032938] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:28.643 [2024-07-14 09:44:13.042231] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.643 [2024-07-14 09:44:13.042702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.643 [2024-07-14 09:44:13.042733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.643 [2024-07-14 09:44:13.042750] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.643 [2024-07-14 09:44:13.043000] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.643 [2024-07-14 09:44:13.043242] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.643 [2024-07-14 09:44:13.043265] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.643 [2024-07-14 09:44:13.043280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.643 [2024-07-14 09:44:13.046857] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:28.643 [2024-07-14 09:44:13.056151] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.643 [2024-07-14 09:44:13.056632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.643 [2024-07-14 09:44:13.056663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.643 [2024-07-14 09:44:13.056680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.643 [2024-07-14 09:44:13.056929] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.643 [2024-07-14 09:44:13.057171] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.643 [2024-07-14 09:44:13.057195] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.643 [2024-07-14 09:44:13.057209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.643 [2024-07-14 09:44:13.060786] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:28.643 [2024-07-14 09:44:13.070081] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.643 [2024-07-14 09:44:13.070561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.643 [2024-07-14 09:44:13.070590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.643 [2024-07-14 09:44:13.070608] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.643 [2024-07-14 09:44:13.070846] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.643 [2024-07-14 09:44:13.071097] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.643 [2024-07-14 09:44:13.071121] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.643 [2024-07-14 09:44:13.071136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.643 [2024-07-14 09:44:13.074714] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:28.643 [2024-07-14 09:44:13.084087] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.643 [2024-07-14 09:44:13.084533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.643 [2024-07-14 09:44:13.084583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.643 [2024-07-14 09:44:13.084606] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.643 [2024-07-14 09:44:13.084846] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.643 [2024-07-14 09:44:13.085098] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.643 [2024-07-14 09:44:13.085121] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.643 [2024-07-14 09:44:13.085136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.931 [2024-07-14 09:44:13.088859] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:28.931 [2024-07-14 09:44:13.097871] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.931 [2024-07-14 09:44:13.098338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.931 [2024-07-14 09:44:13.098367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.931 [2024-07-14 09:44:13.098384] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.931 [2024-07-14 09:44:13.098626] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.931 [2024-07-14 09:44:13.098857] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.931 [2024-07-14 09:44:13.098889] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.931 [2024-07-14 09:44:13.098903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.931 [2024-07-14 09:44:13.102297] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:28.932 [2024-07-14 09:44:13.111215] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.932 [2024-07-14 09:44:13.111656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.932 [2024-07-14 09:44:13.111684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.932 [2024-07-14 09:44:13.111700] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.932 [2024-07-14 09:44:13.111963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.932 [2024-07-14 09:44:13.112185] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.932 [2024-07-14 09:44:13.112204] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.932 [2024-07-14 09:44:13.112217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.932 [2024-07-14 09:44:13.115277] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:28.932 [2024-07-14 09:44:13.124488] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.932 [2024-07-14 09:44:13.124861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.932 [2024-07-14 09:44:13.124909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.932 [2024-07-14 09:44:13.124924] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.932 [2024-07-14 09:44:13.125144] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.932 [2024-07-14 09:44:13.125375] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.932 [2024-07-14 09:44:13.125394] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.932 [2024-07-14 09:44:13.125406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.932 [2024-07-14 09:44:13.128398] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:28.932 [2024-07-14 09:44:13.137683] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.932 [2024-07-14 09:44:13.138139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.932 [2024-07-14 09:44:13.138167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.932 [2024-07-14 09:44:13.138182] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.932 [2024-07-14 09:44:13.138434] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.932 [2024-07-14 09:44:13.138666] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.932 [2024-07-14 09:44:13.138687] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.932 [2024-07-14 09:44:13.138700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.932 [2024-07-14 09:44:13.142048] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:28.932 [2024-07-14 09:44:13.151090] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.932 [2024-07-14 09:44:13.151596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.932 [2024-07-14 09:44:13.151624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.932 [2024-07-14 09:44:13.151640] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.932 [2024-07-14 09:44:13.151898] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.932 [2024-07-14 09:44:13.152103] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.932 [2024-07-14 09:44:13.152122] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.932 [2024-07-14 09:44:13.152135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.932 [2024-07-14 09:44:13.155228] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:28.932 [2024-07-14 09:44:13.164473] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.932 [2024-07-14 09:44:13.164889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.932 [2024-07-14 09:44:13.164917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.932 [2024-07-14 09:44:13.164933] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.932 [2024-07-14 09:44:13.165184] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.932 [2024-07-14 09:44:13.165376] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.932 [2024-07-14 09:44:13.165394] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.932 [2024-07-14 09:44:13.165406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.932 [2024-07-14 09:44:13.168405] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:28.932 [2024-07-14 09:44:13.177712] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.932 [2024-07-14 09:44:13.178179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.932 [2024-07-14 09:44:13.178206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.932 [2024-07-14 09:44:13.178221] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.932 [2024-07-14 09:44:13.178465] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.932 [2024-07-14 09:44:13.178658] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.932 [2024-07-14 09:44:13.178676] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.932 [2024-07-14 09:44:13.178688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.932 [2024-07-14 09:44:13.181686] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:28.932 [2024-07-14 09:44:13.191002] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.932 [2024-07-14 09:44:13.191451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.932 [2024-07-14 09:44:13.191478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.932 [2024-07-14 09:44:13.191494] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.932 [2024-07-14 09:44:13.191742] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.932 [2024-07-14 09:44:13.191978] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.932 [2024-07-14 09:44:13.191999] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.932 [2024-07-14 09:44:13.192012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.932 [2024-07-14 09:44:13.194993] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:28.932 [2024-07-14 09:44:13.204292] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.932 [2024-07-14 09:44:13.204720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.932 [2024-07-14 09:44:13.204747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.932 [2024-07-14 09:44:13.204763] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.932 [2024-07-14 09:44:13.205011] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.932 [2024-07-14 09:44:13.205244] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.932 [2024-07-14 09:44:13.205263] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.932 [2024-07-14 09:44:13.205274] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.932 [2024-07-14 09:44:13.208235] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:28.932 [2024-07-14 09:44:13.217486] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.932 [2024-07-14 09:44:13.217993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.932 [2024-07-14 09:44:13.218022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.932 [2024-07-14 09:44:13.218046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.932 [2024-07-14 09:44:13.218297] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.933 [2024-07-14 09:44:13.218489] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.933 [2024-07-14 09:44:13.218508] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.933 [2024-07-14 09:44:13.218520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.933 [2024-07-14 09:44:13.221518] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:28.933 [2024-07-14 09:44:13.230803] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.933 [2024-07-14 09:44:13.231449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.933 [2024-07-14 09:44:13.231491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.933 [2024-07-14 09:44:13.231510] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.933 [2024-07-14 09:44:13.231756] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.933 [2024-07-14 09:44:13.231999] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.933 [2024-07-14 09:44:13.232019] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.933 [2024-07-14 09:44:13.232032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.933 [2024-07-14 09:44:13.235013] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:28.933 [2024-07-14 09:44:13.244107] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.933 [2024-07-14 09:44:13.244560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.933 [2024-07-14 09:44:13.244588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.933 [2024-07-14 09:44:13.244604] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.933 [2024-07-14 09:44:13.244852] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.933 [2024-07-14 09:44:13.245080] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.933 [2024-07-14 09:44:13.245100] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.933 [2024-07-14 09:44:13.245113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.933 [2024-07-14 09:44:13.248075] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:28.933 [2024-07-14 09:44:13.257355] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.933 [2024-07-14 09:44:13.257794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.933 [2024-07-14 09:44:13.257821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.933 [2024-07-14 09:44:13.257837] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.933 [2024-07-14 09:44:13.258102] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.933 [2024-07-14 09:44:13.258313] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.933 [2024-07-14 09:44:13.258331] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.933 [2024-07-14 09:44:13.258352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.933 [2024-07-14 09:44:13.261313] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:28.933 [2024-07-14 09:44:13.270717] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.933 [2024-07-14 09:44:13.271152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.933 [2024-07-14 09:44:13.271196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.933 [2024-07-14 09:44:13.271212] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.933 [2024-07-14 09:44:13.271460] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.933 [2024-07-14 09:44:13.271654] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.933 [2024-07-14 09:44:13.271672] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.933 [2024-07-14 09:44:13.271684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.933 [2024-07-14 09:44:13.274789] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:28.933 [2024-07-14 09:44:13.284722] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.933 [2024-07-14 09:44:13.285201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.933 [2024-07-14 09:44:13.285228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.933 [2024-07-14 09:44:13.285243] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.933 [2024-07-14 09:44:13.285486] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.933 [2024-07-14 09:44:13.285728] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.933 [2024-07-14 09:44:13.285751] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.933 [2024-07-14 09:44:13.285766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.933 [2024-07-14 09:44:13.289348] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:28.933 [2024-07-14 09:44:13.298647] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.933 [2024-07-14 09:44:13.299125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.933 [2024-07-14 09:44:13.299156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.933 [2024-07-14 09:44:13.299173] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.933 [2024-07-14 09:44:13.299412] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.933 [2024-07-14 09:44:13.299654] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.933 [2024-07-14 09:44:13.299677] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.933 [2024-07-14 09:44:13.299692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.933 [2024-07-14 09:44:13.303282] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:28.933 [2024-07-14 09:44:13.312611] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.933 [2024-07-14 09:44:13.313081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.933 [2024-07-14 09:44:13.313112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.933 [2024-07-14 09:44:13.313129] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.933 [2024-07-14 09:44:13.313368] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.933 [2024-07-14 09:44:13.313610] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.933 [2024-07-14 09:44:13.313633] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.933 [2024-07-14 09:44:13.313648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.933 [2024-07-14 09:44:13.317237] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:28.933 [2024-07-14 09:44:13.326535] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.933 [2024-07-14 09:44:13.326995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.933 [2024-07-14 09:44:13.327028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.933 [2024-07-14 09:44:13.327047] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.933 [2024-07-14 09:44:13.327287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.933 [2024-07-14 09:44:13.327529] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.933 [2024-07-14 09:44:13.327551] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.933 [2024-07-14 09:44:13.327566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.933 [2024-07-14 09:44:13.331159] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:28.934 [2024-07-14 09:44:13.340459] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.934 [2024-07-14 09:44:13.341013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.934 [2024-07-14 09:44:13.341044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.934 [2024-07-14 09:44:13.341062] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.934 [2024-07-14 09:44:13.341301] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.934 [2024-07-14 09:44:13.341543] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.934 [2024-07-14 09:44:13.341566] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.934 [2024-07-14 09:44:13.341581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.934 [2024-07-14 09:44:13.345171] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:28.934 [2024-07-14 09:44:13.354474] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.934 [2024-07-14 09:44:13.354961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.934 [2024-07-14 09:44:13.354992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.934 [2024-07-14 09:44:13.355009] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.934 [2024-07-14 09:44:13.355254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.934 [2024-07-14 09:44:13.355496] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.934 [2024-07-14 09:44:13.355520] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.934 [2024-07-14 09:44:13.355535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.934 [2024-07-14 09:44:13.359123] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:28.934 [2024-07-14 09:44:13.368424] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:28.934 [2024-07-14 09:44:13.368878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.934 [2024-07-14 09:44:13.368908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:28.934 [2024-07-14 09:44:13.368926] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:28.934 [2024-07-14 09:44:13.369165] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:28.934 [2024-07-14 09:44:13.369407] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:28.934 [2024-07-14 09:44:13.369429] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:28.934 [2024-07-14 09:44:13.369444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:28.934 [2024-07-14 09:44:13.373035] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:28.934 [2024-07-14 09:44:13.382564] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.202 [2024-07-14 09:44:13.383041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.202 [2024-07-14 09:44:13.383073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.203 [2024-07-14 09:44:13.383091] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.203 [2024-07-14 09:44:13.383330] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.203 [2024-07-14 09:44:13.383590] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.203 [2024-07-14 09:44:13.383615] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.203 [2024-07-14 09:44:13.383630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.203 [2024-07-14 09:44:13.387238] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:29.203 [2024-07-14 09:44:13.396443] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.203 [2024-07-14 09:44:13.396893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.203 [2024-07-14 09:44:13.396925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.203 [2024-07-14 09:44:13.396943] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.203 [2024-07-14 09:44:13.397183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.203 [2024-07-14 09:44:13.397425] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.203 [2024-07-14 09:44:13.397448] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.203 [2024-07-14 09:44:13.397469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.203 [2024-07-14 09:44:13.401062] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:29.203 [2024-07-14 09:44:13.410368] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.203 [2024-07-14 09:44:13.410837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.203 [2024-07-14 09:44:13.410874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.203 [2024-07-14 09:44:13.410894] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.203 [2024-07-14 09:44:13.411133] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.203 [2024-07-14 09:44:13.411375] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.203 [2024-07-14 09:44:13.411398] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.203 [2024-07-14 09:44:13.411413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.203 [2024-07-14 09:44:13.415003] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:29.203 [2024-07-14 09:44:13.424299] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.203 [2024-07-14 09:44:13.424760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.203 [2024-07-14 09:44:13.424791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.203 [2024-07-14 09:44:13.424808] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.203 [2024-07-14 09:44:13.425289] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.203 [2024-07-14 09:44:13.425533] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.203 [2024-07-14 09:44:13.425556] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.203 [2024-07-14 09:44:13.425571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.203 [2024-07-14 09:44:13.429166] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:29.203 [2024-07-14 09:44:13.438273] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.203 [2024-07-14 09:44:13.438759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.203 [2024-07-14 09:44:13.438789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.203 [2024-07-14 09:44:13.438807] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.203 [2024-07-14 09:44:13.439055] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.203 [2024-07-14 09:44:13.439298] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.203 [2024-07-14 09:44:13.439321] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.203 [2024-07-14 09:44:13.439336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.203 [2024-07-14 09:44:13.442923] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:29.203 [2024-07-14 09:44:13.452223] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.203 [2024-07-14 09:44:13.452695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.203 [2024-07-14 09:44:13.452731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.203 [2024-07-14 09:44:13.452750] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.203 [2024-07-14 09:44:13.452999] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.203 [2024-07-14 09:44:13.453242] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.203 [2024-07-14 09:44:13.453265] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.203 [2024-07-14 09:44:13.453281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.203 [2024-07-14 09:44:13.456859] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:29.203 [2024-07-14 09:44:13.466163] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.203 [2024-07-14 09:44:13.466699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.203 [2024-07-14 09:44:13.466729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.203 [2024-07-14 09:44:13.466747] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.203 [2024-07-14 09:44:13.466996] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.203 [2024-07-14 09:44:13.467239] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.203 [2024-07-14 09:44:13.467262] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.203 [2024-07-14 09:44:13.467277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.203 [2024-07-14 09:44:13.470856] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:29.203 [2024-07-14 09:44:13.480162] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.203 [2024-07-14 09:44:13.480647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.203 [2024-07-14 09:44:13.480678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.203 [2024-07-14 09:44:13.480696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.203 [2024-07-14 09:44:13.480944] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.204 [2024-07-14 09:44:13.481187] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.204 [2024-07-14 09:44:13.481211] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.204 [2024-07-14 09:44:13.481227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.204 [2024-07-14 09:44:13.484801] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:29.204 [2024-07-14 09:44:13.494098] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.204 [2024-07-14 09:44:13.494545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.204 [2024-07-14 09:44:13.494575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.204 [2024-07-14 09:44:13.494592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.204 [2024-07-14 09:44:13.494831] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.204 [2024-07-14 09:44:13.495088] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.204 [2024-07-14 09:44:13.495112] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.204 [2024-07-14 09:44:13.495128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.204 [2024-07-14 09:44:13.498707] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:29.204 [2024-07-14 09:44:13.508027] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.204 [2024-07-14 09:44:13.508499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.204 [2024-07-14 09:44:13.508529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.204 [2024-07-14 09:44:13.508547] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.204 [2024-07-14 09:44:13.508786] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.204 [2024-07-14 09:44:13.509039] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.204 [2024-07-14 09:44:13.509063] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.204 [2024-07-14 09:44:13.509078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.204 [2024-07-14 09:44:13.512659] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:29.204 [2024-07-14 09:44:13.521972] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.204 [2024-07-14 09:44:13.522591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.204 [2024-07-14 09:44:13.522642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.204 [2024-07-14 09:44:13.522660] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.204 [2024-07-14 09:44:13.522908] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.204 [2024-07-14 09:44:13.523160] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.204 [2024-07-14 09:44:13.523183] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.204 [2024-07-14 09:44:13.523198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.204 [2024-07-14 09:44:13.526776] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:29.204 [2024-07-14 09:44:13.535898] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.204 [2024-07-14 09:44:13.536350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.204 [2024-07-14 09:44:13.536381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.204 [2024-07-14 09:44:13.536399] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.204 [2024-07-14 09:44:13.536637] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.204 [2024-07-14 09:44:13.536888] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.204 [2024-07-14 09:44:13.536911] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.204 [2024-07-14 09:44:13.536926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.204 [2024-07-14 09:44:13.540510] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:29.204 [2024-07-14 09:44:13.549821] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.204 [2024-07-14 09:44:13.550414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.204 [2024-07-14 09:44:13.550470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.204 [2024-07-14 09:44:13.550487] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.204 [2024-07-14 09:44:13.550725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.204 [2024-07-14 09:44:13.550977] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.204 [2024-07-14 09:44:13.551001] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.204 [2024-07-14 09:44:13.551016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.204 [2024-07-14 09:44:13.554598] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:29.204 [2024-07-14 09:44:13.563687] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.204 [2024-07-14 09:44:13.564142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.204 [2024-07-14 09:44:13.564173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.204 [2024-07-14 09:44:13.564191] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.204 [2024-07-14 09:44:13.564430] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.204 [2024-07-14 09:44:13.564672] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.204 [2024-07-14 09:44:13.564694] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.204 [2024-07-14 09:44:13.564709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.204 [2024-07-14 09:44:13.568308] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:29.204 [2024-07-14 09:44:13.577609] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.204 [2024-07-14 09:44:13.578053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.204 [2024-07-14 09:44:13.578084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.204 [2024-07-14 09:44:13.578101] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.204 [2024-07-14 09:44:13.578340] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.204 [2024-07-14 09:44:13.578598] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.204 [2024-07-14 09:44:13.578622] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.204 [2024-07-14 09:44:13.578637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.204 [2024-07-14 09:44:13.582289] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:29.204 [2024-07-14 09:44:13.591597] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.204 [2024-07-14 09:44:13.592079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.204 [2024-07-14 09:44:13.592110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.205 [2024-07-14 09:44:13.592132] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.205 [2024-07-14 09:44:13.592372] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.205 [2024-07-14 09:44:13.592614] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.205 [2024-07-14 09:44:13.592637] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.205 [2024-07-14 09:44:13.592652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.205 [2024-07-14 09:44:13.596246] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:29.205 [2024-07-14 09:44:13.605543] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.205 [2024-07-14 09:44:13.606003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.205 [2024-07-14 09:44:13.606034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.205 [2024-07-14 09:44:13.606052] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.205 [2024-07-14 09:44:13.606290] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.205 [2024-07-14 09:44:13.606532] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.205 [2024-07-14 09:44:13.606555] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.205 [2024-07-14 09:44:13.606570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.205 [2024-07-14 09:44:13.610166] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:29.205 [2024-07-14 09:44:13.619472] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.205 [2024-07-14 09:44:13.619925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.205 [2024-07-14 09:44:13.619955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.205 [2024-07-14 09:44:13.619973] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.205 [2024-07-14 09:44:13.620211] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.205 [2024-07-14 09:44:13.620453] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.205 [2024-07-14 09:44:13.620476] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.205 [2024-07-14 09:44:13.620491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.205 [2024-07-14 09:44:13.624084] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:29.205 [2024-07-14 09:44:13.633396] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.205 [2024-07-14 09:44:13.633948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.205 [2024-07-14 09:44:13.633980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.205 [2024-07-14 09:44:13.633998] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.205 [2024-07-14 09:44:13.634237] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.205 [2024-07-14 09:44:13.634478] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.205 [2024-07-14 09:44:13.634507] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.205 [2024-07-14 09:44:13.634523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.205 [2024-07-14 09:44:13.638122] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:29.205 [2024-07-14 09:44:13.647375] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.205 [2024-07-14 09:44:13.647825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.205 [2024-07-14 09:44:13.647857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.205 [2024-07-14 09:44:13.647885] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.205 [2024-07-14 09:44:13.648126] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.205 [2024-07-14 09:44:13.648368] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.205 [2024-07-14 09:44:13.648391] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.205 [2024-07-14 09:44:13.648407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.466 [2024-07-14 09:44:13.652067] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:29.466 [2024-07-14 09:44:13.661592] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.466 [2024-07-14 09:44:13.662058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.466 [2024-07-14 09:44:13.662091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.466 [2024-07-14 09:44:13.662109] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.466 [2024-07-14 09:44:13.662349] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.466 [2024-07-14 09:44:13.662591] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.466 [2024-07-14 09:44:13.662614] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.466 [2024-07-14 09:44:13.662629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.466 [2024-07-14 09:44:13.666217] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:29.466 [2024-07-14 09:44:13.675525] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.466 [2024-07-14 09:44:13.675981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.466 [2024-07-14 09:44:13.676012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.466 [2024-07-14 09:44:13.676030] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.466 [2024-07-14 09:44:13.676269] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.466 [2024-07-14 09:44:13.676511] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.466 [2024-07-14 09:44:13.676534] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.466 [2024-07-14 09:44:13.676549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.466 [2024-07-14 09:44:13.680136] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:29.466 [2024-07-14 09:44:13.689441] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.466 [2024-07-14 09:44:13.689895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.466 [2024-07-14 09:44:13.689932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.466 [2024-07-14 09:44:13.689950] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.466 [2024-07-14 09:44:13.690189] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.466 [2024-07-14 09:44:13.690431] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.466 [2024-07-14 09:44:13.690454] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.466 [2024-07-14 09:44:13.690469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.466 [2024-07-14 09:44:13.694060] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:29.466 [2024-07-14 09:44:13.703375] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.466 [2024-07-14 09:44:13.703861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.466 [2024-07-14 09:44:13.703899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.466 [2024-07-14 09:44:13.703917] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.466 [2024-07-14 09:44:13.704155] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.466 [2024-07-14 09:44:13.704397] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.466 [2024-07-14 09:44:13.704420] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.466 [2024-07-14 09:44:13.704436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.466 [2024-07-14 09:44:13.708024] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:29.466 [2024-07-14 09:44:13.717341] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.466 [2024-07-14 09:44:13.717811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.466 [2024-07-14 09:44:13.717842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.466 [2024-07-14 09:44:13.717860] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.466 [2024-07-14 09:44:13.718108] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.466 [2024-07-14 09:44:13.718351] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.466 [2024-07-14 09:44:13.718374] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.466 [2024-07-14 09:44:13.718389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.466 [2024-07-14 09:44:13.721975] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:29.466 [2024-07-14 09:44:13.731278] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.466 [2024-07-14 09:44:13.731905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.466 [2024-07-14 09:44:13.731936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.466 [2024-07-14 09:44:13.731953] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.466 [2024-07-14 09:44:13.732198] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.466 [2024-07-14 09:44:13.732441] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.466 [2024-07-14 09:44:13.732464] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.466 [2024-07-14 09:44:13.732479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.466 [2024-07-14 09:44:13.736070] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:29.466 [2024-07-14 09:44:13.745163] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.466 [2024-07-14 09:44:13.745631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.466 [2024-07-14 09:44:13.745661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.466 [2024-07-14 09:44:13.745678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.466 [2024-07-14 09:44:13.745927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.466 [2024-07-14 09:44:13.746170] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.466 [2024-07-14 09:44:13.746193] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.466 [2024-07-14 09:44:13.746208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.466 [2024-07-14 09:44:13.749789] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:29.466 [2024-07-14 09:44:13.759106] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.466 [2024-07-14 09:44:13.759552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.466 [2024-07-14 09:44:13.759584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.466 [2024-07-14 09:44:13.759601] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.466 [2024-07-14 09:44:13.759840] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.466 [2024-07-14 09:44:13.760090] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.466 [2024-07-14 09:44:13.760114] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.466 [2024-07-14 09:44:13.760129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.466 [2024-07-14 09:44:13.763708] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:29.466 [2024-07-14 09:44:13.773011] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.466 [2024-07-14 09:44:13.773505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.466 [2024-07-14 09:44:13.773553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.466 [2024-07-14 09:44:13.773571] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.466 [2024-07-14 09:44:13.773810] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.466 [2024-07-14 09:44:13.774061] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.466 [2024-07-14 09:44:13.774085] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.466 [2024-07-14 09:44:13.774105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.466 [2024-07-14 09:44:13.777682] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:29.466 [2024-07-14 09:44:13.786994] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.466 [2024-07-14 09:44:13.787482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.466 [2024-07-14 09:44:13.787530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.466 [2024-07-14 09:44:13.787548] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.466 [2024-07-14 09:44:13.787786] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.466 [2024-07-14 09:44:13.788038] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.466 [2024-07-14 09:44:13.788062] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.466 [2024-07-14 09:44:13.788076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.466 [2024-07-14 09:44:13.791684] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:29.466 [2024-07-14 09:44:13.800988] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.466 [2024-07-14 09:44:13.801487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.466 [2024-07-14 09:44:13.801535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.466 [2024-07-14 09:44:13.801552] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.466 [2024-07-14 09:44:13.801791] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.466 [2024-07-14 09:44:13.802041] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.466 [2024-07-14 09:44:13.802066] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.466 [2024-07-14 09:44:13.802081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.466 [2024-07-14 09:44:13.805659] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:29.466 [2024-07-14 09:44:13.814968] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.466 [2024-07-14 09:44:13.815407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.466 [2024-07-14 09:44:13.815438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.466 [2024-07-14 09:44:13.815455] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.466 [2024-07-14 09:44:13.815693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.466 [2024-07-14 09:44:13.815948] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.466 [2024-07-14 09:44:13.815972] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.466 [2024-07-14 09:44:13.815986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.466 [2024-07-14 09:44:13.819565] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:29.466 [2024-07-14 09:44:13.828859] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.466 [2024-07-14 09:44:13.829360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.466 [2024-07-14 09:44:13.829396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.466 [2024-07-14 09:44:13.829414] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.466 [2024-07-14 09:44:13.829653] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.466 [2024-07-14 09:44:13.829906] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.466 [2024-07-14 09:44:13.829930] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.466 [2024-07-14 09:44:13.829945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.466 [2024-07-14 09:44:13.833523] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:29.466 [2024-07-14 09:44:13.842825] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.466 [2024-07-14 09:44:13.843273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.466 [2024-07-14 09:44:13.843304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.466 [2024-07-14 09:44:13.843321] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.466 [2024-07-14 09:44:13.843560] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.466 [2024-07-14 09:44:13.843803] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.466 [2024-07-14 09:44:13.843826] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.466 [2024-07-14 09:44:13.843841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.466 [2024-07-14 09:44:13.847429] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:29.466 [2024-07-14 09:44:13.856738] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.466 [2024-07-14 09:44:13.857192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.466 [2024-07-14 09:44:13.857222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.466 [2024-07-14 09:44:13.857239] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.466 [2024-07-14 09:44:13.857478] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.466 [2024-07-14 09:44:13.857720] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.466 [2024-07-14 09:44:13.857743] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.466 [2024-07-14 09:44:13.857759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.466 [2024-07-14 09:44:13.861349] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:29.466 [2024-07-14 09:44:13.870646] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.466 [2024-07-14 09:44:13.871127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.466 [2024-07-14 09:44:13.871157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.466 [2024-07-14 09:44:13.871175] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.466 [2024-07-14 09:44:13.871413] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.466 [2024-07-14 09:44:13.871661] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.466 [2024-07-14 09:44:13.871684] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.466 [2024-07-14 09:44:13.871700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.466 [2024-07-14 09:44:13.875288] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:29.466 [2024-07-14 09:44:13.884588] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.466 [2024-07-14 09:44:13.885038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.466 [2024-07-14 09:44:13.885069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.466 [2024-07-14 09:44:13.885086] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.466 [2024-07-14 09:44:13.885325] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.466 [2024-07-14 09:44:13.885567] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.466 [2024-07-14 09:44:13.885590] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.466 [2024-07-14 09:44:13.885605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.466 [2024-07-14 09:44:13.889191] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:29.466 [2024-07-14 09:44:13.898488] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.467 [2024-07-14 09:44:13.898914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.467 [2024-07-14 09:44:13.898946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.467 [2024-07-14 09:44:13.898963] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.467 [2024-07-14 09:44:13.899202] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.467 [2024-07-14 09:44:13.899444] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.467 [2024-07-14 09:44:13.899467] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.467 [2024-07-14 09:44:13.899482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.467 [2024-07-14 09:44:13.903068] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:29.467 [2024-07-14 09:44:13.912368] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.467 [2024-07-14 09:44:13.912834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.467 [2024-07-14 09:44:13.912874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.467 [2024-07-14 09:44:13.912894] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.467 [2024-07-14 09:44:13.913133] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.467 [2024-07-14 09:44:13.913375] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.467 [2024-07-14 09:44:13.913399] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.467 [2024-07-14 09:44:13.913414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.467 [2024-07-14 09:44:13.917188] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:29.725 [2024-07-14 09:44:13.926216] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.725 [2024-07-14 09:44:13.926675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.725 [2024-07-14 09:44:13.926706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.725 [2024-07-14 09:44:13.926724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.725 [2024-07-14 09:44:13.926973] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.725 [2024-07-14 09:44:13.927215] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.725 [2024-07-14 09:44:13.927239] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.725 [2024-07-14 09:44:13.927254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.725 [2024-07-14 09:44:13.930834] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:29.725 [2024-07-14 09:44:13.940135] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.725 [2024-07-14 09:44:13.940603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.725 [2024-07-14 09:44:13.940633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.725 [2024-07-14 09:44:13.940651] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.725 [2024-07-14 09:44:13.940901] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.725 [2024-07-14 09:44:13.941144] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.725 [2024-07-14 09:44:13.941167] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.725 [2024-07-14 09:44:13.941181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.725 [2024-07-14 09:44:13.944757] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:29.725 [2024-07-14 09:44:13.954064] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.725 [2024-07-14 09:44:13.954529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.725 [2024-07-14 09:44:13.954560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.725 [2024-07-14 09:44:13.954577] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.725 [2024-07-14 09:44:13.954816] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.725 [2024-07-14 09:44:13.955068] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.725 [2024-07-14 09:44:13.955092] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.725 [2024-07-14 09:44:13.955107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.725 [2024-07-14 09:44:13.958686] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:29.725 [2024-07-14 09:44:13.967994] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.725 [2024-07-14 09:44:13.968464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.725 [2024-07-14 09:44:13.968494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.725 [2024-07-14 09:44:13.968518] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.725 [2024-07-14 09:44:13.968757] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.725 [2024-07-14 09:44:13.969011] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.725 [2024-07-14 09:44:13.969035] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.725 [2024-07-14 09:44:13.969050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.725 [2024-07-14 09:44:13.972629] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:29.725 [2024-07-14 09:44:13.981933] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.725 [2024-07-14 09:44:13.982402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.725 [2024-07-14 09:44:13.982432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.725 [2024-07-14 09:44:13.982450] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.725 [2024-07-14 09:44:13.982688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.725 [2024-07-14 09:44:13.982942] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.725 [2024-07-14 09:44:13.982966] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.725 [2024-07-14 09:44:13.982981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.725 [2024-07-14 09:44:13.986560] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:29.725 [2024-07-14 09:44:13.995855] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.725 [2024-07-14 09:44:13.996303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.725 [2024-07-14 09:44:13.996334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.725 [2024-07-14 09:44:13.996351] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.725 [2024-07-14 09:44:13.996590] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.725 [2024-07-14 09:44:13.996832] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.725 [2024-07-14 09:44:13.996854] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.725 [2024-07-14 09:44:13.996879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.725 [2024-07-14 09:44:14.000460] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:29.725 [2024-07-14 09:44:14.009757] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.725 [2024-07-14 09:44:14.010227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.725 [2024-07-14 09:44:14.010258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.725 [2024-07-14 09:44:14.010275] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.725 [2024-07-14 09:44:14.010514] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.725 [2024-07-14 09:44:14.010756] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.725 [2024-07-14 09:44:14.010785] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.725 [2024-07-14 09:44:14.010800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.725 [2024-07-14 09:44:14.014394] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:29.725 [2024-07-14 09:44:14.023690] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.725 [2024-07-14 09:44:14.024146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.725 [2024-07-14 09:44:14.024177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.725 [2024-07-14 09:44:14.024194] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.725 [2024-07-14 09:44:14.024432] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.725 [2024-07-14 09:44:14.024675] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.725 [2024-07-14 09:44:14.024698] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.725 [2024-07-14 09:44:14.024713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.725 [2024-07-14 09:44:14.028304] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:29.725 [2024-07-14 09:44:14.037607] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.725 [2024-07-14 09:44:14.038111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.725 [2024-07-14 09:44:14.038159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.725 [2024-07-14 09:44:14.038177] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.725 [2024-07-14 09:44:14.038415] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.725 [2024-07-14 09:44:14.038657] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.725 [2024-07-14 09:44:14.038680] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.725 [2024-07-14 09:44:14.038696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.725 [2024-07-14 09:44:14.042286] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:29.725 [2024-07-14 09:44:14.051587] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.725 [2024-07-14 09:44:14.052091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.725 [2024-07-14 09:44:14.052140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.725 [2024-07-14 09:44:14.052157] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.725 [2024-07-14 09:44:14.052395] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.725 [2024-07-14 09:44:14.052638] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.725 [2024-07-14 09:44:14.052661] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.725 [2024-07-14 09:44:14.052676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.726 [2024-07-14 09:44:14.056265] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:29.726 [2024-07-14 09:44:14.065561] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.726 [2024-07-14 09:44:14.066036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.726 [2024-07-14 09:44:14.066068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.726 [2024-07-14 09:44:14.066085] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.726 [2024-07-14 09:44:14.066324] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.726 [2024-07-14 09:44:14.066566] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.726 [2024-07-14 09:44:14.066589] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.726 [2024-07-14 09:44:14.066604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.726 [2024-07-14 09:44:14.070195] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:29.726 [2024-07-14 09:44:14.079496] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.726 [2024-07-14 09:44:14.079940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.726 [2024-07-14 09:44:14.079971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.726 [2024-07-14 09:44:14.079988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.726 [2024-07-14 09:44:14.080227] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.726 [2024-07-14 09:44:14.080469] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.726 [2024-07-14 09:44:14.080492] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.726 [2024-07-14 09:44:14.080507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.726 [2024-07-14 09:44:14.084097] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:29.726 [2024-07-14 09:44:14.093398] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.726 [2024-07-14 09:44:14.093842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.726 [2024-07-14 09:44:14.093880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.726 [2024-07-14 09:44:14.093900] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.726 [2024-07-14 09:44:14.094139] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.726 [2024-07-14 09:44:14.094380] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.726 [2024-07-14 09:44:14.094404] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.726 [2024-07-14 09:44:14.094419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.726 [2024-07-14 09:44:14.098004] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:29.726 [2024-07-14 09:44:14.107297] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.726 [2024-07-14 09:44:14.107841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.726 [2024-07-14 09:44:14.107905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.726 [2024-07-14 09:44:14.107923] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.726 [2024-07-14 09:44:14.108168] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.726 [2024-07-14 09:44:14.108410] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.726 [2024-07-14 09:44:14.108433] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.726 [2024-07-14 09:44:14.108448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.726 [2024-07-14 09:44:14.112038] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:29.726 [2024-07-14 09:44:14.121332] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.726 [2024-07-14 09:44:14.121800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.726 [2024-07-14 09:44:14.121830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.726 [2024-07-14 09:44:14.121848] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.726 [2024-07-14 09:44:14.122096] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.726 [2024-07-14 09:44:14.122338] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.726 [2024-07-14 09:44:14.122362] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.726 [2024-07-14 09:44:14.122377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.726 [2024-07-14 09:44:14.125960] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:29.726 [2024-07-14 09:44:14.135267] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.726 [2024-07-14 09:44:14.135733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.726 [2024-07-14 09:44:14.135764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.726 [2024-07-14 09:44:14.135781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.726 [2024-07-14 09:44:14.136030] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.726 [2024-07-14 09:44:14.136273] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.726 [2024-07-14 09:44:14.136296] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.726 [2024-07-14 09:44:14.136311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.726 [2024-07-14 09:44:14.139901] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:29.726 [2024-07-14 09:44:14.149202] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.726 [2024-07-14 09:44:14.149669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.726 [2024-07-14 09:44:14.149699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.726 [2024-07-14 09:44:14.149717] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.726 [2024-07-14 09:44:14.149967] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.726 [2024-07-14 09:44:14.150209] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.726 [2024-07-14 09:44:14.150232] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.726 [2024-07-14 09:44:14.150253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.726 [2024-07-14 09:44:14.153835] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:29.726 [2024-07-14 09:44:14.163135] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.726 [2024-07-14 09:44:14.163611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.726 [2024-07-14 09:44:14.163641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.726 [2024-07-14 09:44:14.163659] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.726 [2024-07-14 09:44:14.163909] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.726 [2024-07-14 09:44:14.164151] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.726 [2024-07-14 09:44:14.164174] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.726 [2024-07-14 09:44:14.164188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.726 [2024-07-14 09:44:14.167764] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:29.985 [2024-07-14 09:44:14.177277] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.985 [2024-07-14 09:44:14.177776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.985 [2024-07-14 09:44:14.177808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.985 [2024-07-14 09:44:14.177825] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.985 [2024-07-14 09:44:14.178075] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.985 [2024-07-14 09:44:14.178318] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.985 [2024-07-14 09:44:14.178341] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.985 [2024-07-14 09:44:14.178356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.985 [2024-07-14 09:44:14.182055] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:29.985 [2024-07-14 09:44:14.191138] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.985 [2024-07-14 09:44:14.191593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.985 [2024-07-14 09:44:14.191624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.985 [2024-07-14 09:44:14.191642] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.985 [2024-07-14 09:44:14.191895] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.985 [2024-07-14 09:44:14.192138] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.985 [2024-07-14 09:44:14.192161] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.985 [2024-07-14 09:44:14.192176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.985 [2024-07-14 09:44:14.195753] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:29.985 [2024-07-14 09:44:14.205055] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.985 [2024-07-14 09:44:14.205526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.985 [2024-07-14 09:44:14.205562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.985 [2024-07-14 09:44:14.205580] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.985 [2024-07-14 09:44:14.205819] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.985 [2024-07-14 09:44:14.206071] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.985 [2024-07-14 09:44:14.206095] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.985 [2024-07-14 09:44:14.206110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.985 [2024-07-14 09:44:14.209688] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:29.985 [2024-07-14 09:44:14.218995] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.985 [2024-07-14 09:44:14.219442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.985 [2024-07-14 09:44:14.219473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.985 [2024-07-14 09:44:14.219490] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.985 [2024-07-14 09:44:14.219729] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.985 [2024-07-14 09:44:14.219982] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.985 [2024-07-14 09:44:14.220006] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.985 [2024-07-14 09:44:14.220021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.985 [2024-07-14 09:44:14.223599] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:29.985 [2024-07-14 09:44:14.232905] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.985 [2024-07-14 09:44:14.233349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.985 [2024-07-14 09:44:14.233379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.985 [2024-07-14 09:44:14.233396] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.985 [2024-07-14 09:44:14.233634] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.985 [2024-07-14 09:44:14.233888] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.985 [2024-07-14 09:44:14.233912] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.985 [2024-07-14 09:44:14.233927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.985 [2024-07-14 09:44:14.237507] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:29.985 [2024-07-14 09:44:14.246814] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.985 [2024-07-14 09:44:14.247267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.985 [2024-07-14 09:44:14.247299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.985 [2024-07-14 09:44:14.247317] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.985 [2024-07-14 09:44:14.247557] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.985 [2024-07-14 09:44:14.247808] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.985 [2024-07-14 09:44:14.247832] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.985 [2024-07-14 09:44:14.247846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.985 [2024-07-14 09:44:14.251434] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:29.985 [2024-07-14 09:44:14.260729] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.985 [2024-07-14 09:44:14.261175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.985 [2024-07-14 09:44:14.261206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.985 [2024-07-14 09:44:14.261223] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.985 [2024-07-14 09:44:14.261462] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.985 [2024-07-14 09:44:14.261704] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.985 [2024-07-14 09:44:14.261727] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.985 [2024-07-14 09:44:14.261742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.985 [2024-07-14 09:44:14.265329] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:29.985 [2024-07-14 09:44:14.274623] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.985 [2024-07-14 09:44:14.275082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.985 [2024-07-14 09:44:14.275112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.985 [2024-07-14 09:44:14.275130] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.985 [2024-07-14 09:44:14.275369] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.985 [2024-07-14 09:44:14.275611] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.985 [2024-07-14 09:44:14.275634] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.985 [2024-07-14 09:44:14.275649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.985 [2024-07-14 09:44:14.279238] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:29.985 [2024-07-14 09:44:14.288542] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.985 [2024-07-14 09:44:14.289015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.985 [2024-07-14 09:44:14.289046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.985 [2024-07-14 09:44:14.289064] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.985 [2024-07-14 09:44:14.289303] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.985 [2024-07-14 09:44:14.289545] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.985 [2024-07-14 09:44:14.289568] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.985 [2024-07-14 09:44:14.289584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.985 [2024-07-14 09:44:14.293173] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:29.985 [2024-07-14 09:44:14.302481] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.985 [2024-07-14 09:44:14.302961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.985 [2024-07-14 09:44:14.302992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.985 [2024-07-14 09:44:14.303010] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.985 [2024-07-14 09:44:14.303248] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.985 [2024-07-14 09:44:14.303489] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.985 [2024-07-14 09:44:14.303512] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.985 [2024-07-14 09:44:14.303528] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.985 [2024-07-14 09:44:14.307116] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:29.985 [2024-07-14 09:44:14.316423] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.985 [2024-07-14 09:44:14.316885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.985 [2024-07-14 09:44:14.316916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.985 [2024-07-14 09:44:14.316934] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.985 [2024-07-14 09:44:14.317173] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.985 [2024-07-14 09:44:14.317415] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.985 [2024-07-14 09:44:14.317437] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.985 [2024-07-14 09:44:14.317452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.985 [2024-07-14 09:44:14.321044] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:29.985 [2024-07-14 09:44:14.330356] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.985 [2024-07-14 09:44:14.330825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.985 [2024-07-14 09:44:14.330855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.985 [2024-07-14 09:44:14.330882] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.985 [2024-07-14 09:44:14.331122] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.985 [2024-07-14 09:44:14.331365] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.985 [2024-07-14 09:44:14.331387] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.985 [2024-07-14 09:44:14.331402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.985 [2024-07-14 09:44:14.334988] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:29.985 [2024-07-14 09:44:14.344294] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.985 [2024-07-14 09:44:14.344773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.985 [2024-07-14 09:44:14.344803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.985 [2024-07-14 09:44:14.344828] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.985 [2024-07-14 09:44:14.345079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.985 [2024-07-14 09:44:14.345322] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.985 [2024-07-14 09:44:14.345345] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.985 [2024-07-14 09:44:14.345360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.985 [2024-07-14 09:44:14.348946] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:29.985 [2024-07-14 09:44:14.358244] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.985 [2024-07-14 09:44:14.358715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.985 [2024-07-14 09:44:14.358745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.986 [2024-07-14 09:44:14.358762] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.986 [2024-07-14 09:44:14.359012] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.986 [2024-07-14 09:44:14.359255] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.986 [2024-07-14 09:44:14.359278] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.986 [2024-07-14 09:44:14.359294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.986 [2024-07-14 09:44:14.362883] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:29.986 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 894064 Killed "${NVMF_APP[@]}" "$@" 00:34:29.986 09:44:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:34:29.986 09:44:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:34:29.986 09:44:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:29.986 09:44:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:29.986 09:44:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:29.986 [2024-07-14 09:44:14.372201] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.986 [2024-07-14 09:44:14.372681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.986 [2024-07-14 09:44:14.372718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.986 [2024-07-14 09:44:14.372736] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.986 [2024-07-14 09:44:14.372986] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.986 [2024-07-14 09:44:14.373229] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.986 [2024-07-14 09:44:14.373251] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.986 [2024-07-14 09:44:14.373268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:34:29.986 09:44:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=895017 00:34:29.986 09:44:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:34:29.986 09:44:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 895017 00:34:29.986 09:44:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 895017 ']' 00:34:29.986 09:44:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:29.986 09:44:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:29.986 09:44:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:29.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:29.986 09:44:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:29.986 09:44:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:29.986 [2024-07-14 09:44:14.376857] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:29.986 [2024-07-14 09:44:14.386178] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.986 [2024-07-14 09:44:14.386652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.986 [2024-07-14 09:44:14.386683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.986 [2024-07-14 09:44:14.386701] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.986 [2024-07-14 09:44:14.386951] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.986 [2024-07-14 09:44:14.387193] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.986 [2024-07-14 09:44:14.387216] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.986 [2024-07-14 09:44:14.387231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.986 [2024-07-14 09:44:14.390808] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
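The bdevperf.sh trace above marks the pivot of this phase: pid 894064 (the old NVMF_APP) is reported Killed on line 35, and tgt_init/nvmfappstart immediately relaunch nvmf_tgt as pid 895017 inside the cvl_0_0_ns_spdk namespace, then wait for its RPC socket before any rpc_cmd is issued. Reduced to its essentials, that restart pattern looks roughly like this (a sketch; the polling loop is an illustrative stand-in for the script's waitforlisten helper):

    NS=cvl_0_0_ns_spdk
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sudo ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
    tgt_pid=$!
    # wait until the new target answers on its default RPC socket
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "nvmf_tgt ($tgt_pid) is up on /var/tmp/spdk.sock"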
00:34:29.986 [2024-07-14 09:44:14.400115] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.986 [2024-07-14 09:44:14.400576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.986 [2024-07-14 09:44:14.400606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.986 [2024-07-14 09:44:14.400624] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.986 [2024-07-14 09:44:14.400863] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.986 [2024-07-14 09:44:14.401115] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.986 [2024-07-14 09:44:14.401138] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.986 [2024-07-14 09:44:14.401153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.986 [2024-07-14 09:44:14.404732] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:29.986 [2024-07-14 09:44:14.414043] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.986 [2024-07-14 09:44:14.414520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.986 [2024-07-14 09:44:14.414550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.986 [2024-07-14 09:44:14.414568] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.986 [2024-07-14 09:44:14.414806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.986 [2024-07-14 09:44:14.415058] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.986 [2024-07-14 09:44:14.415081] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.986 [2024-07-14 09:44:14.415102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.986 [2024-07-14 09:44:14.418682] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:29.986 [2024-07-14 09:44:14.421139] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:34:29.986 [2024-07-14 09:44:14.421215] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:29.986 [2024-07-14 09:44:14.428007] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.986 [2024-07-14 09:44:14.428486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.986 [2024-07-14 09:44:14.428517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:29.986 [2024-07-14 09:44:14.428545] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:29.986 [2024-07-14 09:44:14.428784] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:29.986 [2024-07-14 09:44:14.429035] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:29.986 [2024-07-14 09:44:14.429059] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:29.986 [2024-07-14 09:44:14.429075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:29.986 [2024-07-14 09:44:14.432703] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:30.245 [2024-07-14 09:44:14.442092] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:30.245 [2024-07-14 09:44:14.442542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.245 [2024-07-14 09:44:14.442573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:30.245 [2024-07-14 09:44:14.442591] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:30.245 [2024-07-14 09:44:14.442830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:30.245 [2024-07-14 09:44:14.443082] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:30.245 [2024-07-14 09:44:14.443106] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:30.245 [2024-07-14 09:44:14.443121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:30.245 [2024-07-14 09:44:14.446902] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
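The EAL parameter dump confirms what the replacement target was started with: SPDK v24.09-pre (git 719d03c6a) on DPDK 22.11.4, core mask 0xE, shm id 0 (hence --file-prefix=spdk0). Once the app is up, the same facts can be pulled back over RPC; a small sketch, assuming the default /var/tmp/spdk.sock socket:

    scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version
    scripts/rpc.py -s /var/tmp/spdk.sock framework_get_reactors    # lists the reactors pinned by -m 0xE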
00:34:30.245 EAL: No free 2048 kB hugepages reported on node 1 00:34:30.245 [2024-07-14 09:44:14.455989] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:30.245 [2024-07-14 09:44:14.456461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.245 [2024-07-14 09:44:14.456493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:30.245 [2024-07-14 09:44:14.456511] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:30.245 [2024-07-14 09:44:14.456750] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:30.245 [2024-07-14 09:44:14.457002] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:30.245 [2024-07-14 09:44:14.457026] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:30.245 [2024-07-14 09:44:14.457047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:30.245 [2024-07-14 09:44:14.460635] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:30.245 [2024-07-14 09:44:14.469934] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:30.245 [2024-07-14 09:44:14.470382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.245 [2024-07-14 09:44:14.470412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:30.245 [2024-07-14 09:44:14.470430] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:30.245 [2024-07-14 09:44:14.470669] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:30.245 [2024-07-14 09:44:14.470921] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:30.246 [2024-07-14 09:44:14.470945] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:30.246 [2024-07-14 09:44:14.470961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:30.246 [2024-07-14 09:44:14.474538] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
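The EAL notice at the top of this block only says that NUMA node 1 has no free 2 MB hugepages reserved; the app carries on because another node satisfies the allocation. If it ever did become fatal, the per-node pools can be inspected and topped up through sysfs (standard paths, shown as a sketch; SPDK's scripts/setup.sh normally handles this):

    # per-node view of the 2 MB hugepage pool
    grep . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
    # reserve 1024 x 2 MB pages on node 1 (root required)
    echo 1024 | sudo tee /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages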
00:34:30.246 [2024-07-14 09:44:14.483350] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:30.246 [2024-07-14 09:44:14.483782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.246 [2024-07-14 09:44:14.483810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:30.246 [2024-07-14 09:44:14.483826] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:30.246 [2024-07-14 09:44:14.484062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:30.246 [2024-07-14 09:44:14.484298] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:30.246 [2024-07-14 09:44:14.484317] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:30.246 [2024-07-14 09:44:14.484330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:30.246 [2024-07-14 09:44:14.487381] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:30.246 [2024-07-14 09:44:14.488927] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:30.246 [2024-07-14 09:44:14.496781] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:30.246 [2024-07-14 09:44:14.497531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.246 [2024-07-14 09:44:14.497597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:30.246 [2024-07-14 09:44:14.497618] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:30.246 [2024-07-14 09:44:14.497862] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:30.246 [2024-07-14 09:44:14.498084] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:30.246 [2024-07-14 09:44:14.498104] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:30.246 [2024-07-14 09:44:14.498119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:30.246 [2024-07-14 09:44:14.501195] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
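spdk_app_start reporting "Total cores available: 3" follows directly from the -m 0xE mask handed to nvmf_tgt: bits 1 through 3 are set, so reactors get cores 1, 2 and 3 (their startup messages appear a little later in the log) and core 0 is left free. The arithmetic, checked in shell:

    printf 'mask for cores 1-3: 0x%X\n' $(( (1<<1) | (1<<2) | (1<<3) ))    # prints 0xE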
00:34:30.246 [2024-07-14 09:44:14.510285] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:30.246 [2024-07-14 09:44:14.510820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.246 [2024-07-14 09:44:14.510854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:30.246 [2024-07-14 09:44:14.510881] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:30.246 [2024-07-14 09:44:14.511127] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:30.246 [2024-07-14 09:44:14.511344] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:30.246 [2024-07-14 09:44:14.511363] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:30.246 [2024-07-14 09:44:14.511376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:30.246 [2024-07-14 09:44:14.514436] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:30.246 [2024-07-14 09:44:14.523635] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:30.246 [2024-07-14 09:44:14.524119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.246 [2024-07-14 09:44:14.524160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:30.246 [2024-07-14 09:44:14.524177] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:30.246 [2024-07-14 09:44:14.524416] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:30.246 [2024-07-14 09:44:14.524615] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:30.246 [2024-07-14 09:44:14.524634] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:30.246 [2024-07-14 09:44:14.524647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:30.246 [2024-07-14 09:44:14.527712] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:30.246 [2024-07-14 09:44:14.536969] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:30.246 [2024-07-14 09:44:14.537493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.246 [2024-07-14 09:44:14.537526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:30.246 [2024-07-14 09:44:14.537543] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:30.246 [2024-07-14 09:44:14.537780] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:30.246 [2024-07-14 09:44:14.538034] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:30.246 [2024-07-14 09:44:14.538056] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:30.246 [2024-07-14 09:44:14.538072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:30.246 [2024-07-14 09:44:14.541220] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:30.246 [2024-07-14 09:44:14.550489] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:30.246 [2024-07-14 09:44:14.551070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.246 [2024-07-14 09:44:14.551107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:30.246 [2024-07-14 09:44:14.551141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:30.246 [2024-07-14 09:44:14.551392] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:30.246 [2024-07-14 09:44:14.551600] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:30.246 [2024-07-14 09:44:14.551621] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:30.246 [2024-07-14 09:44:14.551636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:30.246 [2024-07-14 09:44:14.554793] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:30.246 [2024-07-14 09:44:14.564031] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:30.246 [2024-07-14 09:44:14.564480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.246 [2024-07-14 09:44:14.564507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:30.246 [2024-07-14 09:44:14.564523] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:30.246 [2024-07-14 09:44:14.564732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:30.246 [2024-07-14 09:44:14.564980] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:30.246 [2024-07-14 09:44:14.565001] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:30.246 [2024-07-14 09:44:14.565015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:30.246 [2024-07-14 09:44:14.568201] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:30.246 [2024-07-14 09:44:14.577461] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:30.246 [2024-07-14 09:44:14.577930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.246 [2024-07-14 09:44:14.577959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:30.246 [2024-07-14 09:44:14.577976] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:30.246 [2024-07-14 09:44:14.578234] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:30.246 [2024-07-14 09:44:14.578432] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:30.246 [2024-07-14 09:44:14.578452] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:30.246 [2024-07-14 09:44:14.578464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:30.246 [2024-07-14 09:44:14.580402] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:30.246 [2024-07-14 09:44:14.580452] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:30.246 [2024-07-14 09:44:14.580466] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:30.246 [2024-07-14 09:44:14.580478] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:30.246 [2024-07-14 09:44:14.580488] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
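Because the target runs with -e 0xFFFF, every tracepoint group is enabled and app_setup_trace prints the two retrieval options quoted above. Spelled out as commands (a sketch; the app name, shm id and trace file name match this run):

    # live snapshot from the running target (app name nvmf, shm id 0)
    spdk_trace -s nvmf -i 0
    # or keep the shared-memory trace file for later offline analysis
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0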
00:34:30.246 [2024-07-14 09:44:14.580537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:30.246 [2024-07-14 09:44:14.580770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:34:30.246 [2024-07-14 09:44:14.580772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:30.246 [2024-07-14 09:44:14.581657] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:30.246 [2024-07-14 09:44:14.591071] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:30.246 [2024-07-14 09:44:14.591687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.246 [2024-07-14 09:44:14.591725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:30.246 [2024-07-14 09:44:14.591744] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:30.246 [2024-07-14 09:44:14.591977] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:30.246 [2024-07-14 09:44:14.592215] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:30.246 [2024-07-14 09:44:14.592236] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:30.246 [2024-07-14 09:44:14.592252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:30.246 [2024-07-14 09:44:14.595446] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:30.246 [2024-07-14 09:44:14.604638] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:30.246 [2024-07-14 09:44:14.605259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.246 [2024-07-14 09:44:14.605297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:30.246 [2024-07-14 09:44:14.605316] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:30.246 [2024-07-14 09:44:14.605555] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:30.246 [2024-07-14 09:44:14.605769] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:30.246 [2024-07-14 09:44:14.605790] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:30.246 [2024-07-14 09:44:14.605806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:30.246 [2024-07-14 09:44:14.609052] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:30.246 [2024-07-14 09:44:14.618259] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:30.246 [2024-07-14 09:44:14.618840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.246 [2024-07-14 09:44:14.618884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:30.246 [2024-07-14 09:44:14.618916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:30.246 [2024-07-14 09:44:14.619141] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:30.246 [2024-07-14 09:44:14.619371] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:30.246 [2024-07-14 09:44:14.619391] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:30.246 [2024-07-14 09:44:14.619407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:30.246 [2024-07-14 09:44:14.622672] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:30.246 [2024-07-14 09:44:14.631946] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:30.246 [2024-07-14 09:44:14.632514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.246 [2024-07-14 09:44:14.632551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:30.246 [2024-07-14 09:44:14.632570] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:30.246 [2024-07-14 09:44:14.632806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:30.246 [2024-07-14 09:44:14.633036] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:30.246 [2024-07-14 09:44:14.633058] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:30.246 [2024-07-14 09:44:14.633073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:30.246 [2024-07-14 09:44:14.636341] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:30.246 [2024-07-14 09:44:14.645604] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:30.246 [2024-07-14 09:44:14.646197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.246 [2024-07-14 09:44:14.646235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:30.246 [2024-07-14 09:44:14.646254] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:30.246 [2024-07-14 09:44:14.646479] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:30.246 [2024-07-14 09:44:14.646701] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:30.246 [2024-07-14 09:44:14.646722] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:30.246 [2024-07-14 09:44:14.646738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:30.246 [2024-07-14 09:44:14.650012] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:30.246 [2024-07-14 09:44:14.659347] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:30.246 [2024-07-14 09:44:14.659960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.246 [2024-07-14 09:44:14.659997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:30.246 [2024-07-14 09:44:14.660016] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:30.246 [2024-07-14 09:44:14.660239] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:30.246 [2024-07-14 09:44:14.660459] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:30.246 [2024-07-14 09:44:14.660481] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:30.246 [2024-07-14 09:44:14.660497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:30.246 [2024-07-14 09:44:14.663755] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:30.246 [2024-07-14 09:44:14.672925] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:30.246 [2024-07-14 09:44:14.673395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.246 [2024-07-14 09:44:14.673422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:30.246 [2024-07-14 09:44:14.673439] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:30.246 [2024-07-14 09:44:14.673654] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:30.246 [2024-07-14 09:44:14.673908] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:30.246 [2024-07-14 09:44:14.673930] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:30.246 [2024-07-14 09:44:14.673951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:30.246 [2024-07-14 09:44:14.677170] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:30.246 [2024-07-14 09:44:14.686777] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:30.246 [2024-07-14 09:44:14.687223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.246 [2024-07-14 09:44:14.687251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:30.246 [2024-07-14 09:44:14.687271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:30.246 [2024-07-14 09:44:14.687487] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:30.246 [2024-07-14 09:44:14.687705] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:30.246 [2024-07-14 09:44:14.687726] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:30.246 [2024-07-14 09:44:14.687740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:30.246 [2024-07-14 09:44:14.691001] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:30.505 09:44:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:30.505 09:44:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:34:30.505 09:44:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:30.505 09:44:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:30.505 09:44:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:30.505 [2024-07-14 09:44:14.700599] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:30.505 [2024-07-14 09:44:14.701051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.505 [2024-07-14 09:44:14.701090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:30.505 [2024-07-14 09:44:14.701119] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:30.505 [2024-07-14 09:44:14.701354] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:30.505 [2024-07-14 09:44:14.701574] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:30.505 [2024-07-14 09:44:14.701596] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:30.505 [2024-07-14 09:44:14.701610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:30.505 [2024-07-14 09:44:14.704936] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:30.505 [2024-07-14 09:44:14.714176] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:30.505 [2024-07-14 09:44:14.714620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.505 [2024-07-14 09:44:14.714648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:30.505 [2024-07-14 09:44:14.714665] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:30.505 [2024-07-14 09:44:14.714888] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:30.505 [2024-07-14 09:44:14.715114] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:30.505 [2024-07-14 09:44:14.715137] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:30.505 [2024-07-14 09:44:14.715166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:30.505 [2024-07-14 09:44:14.718456] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:30.505 09:44:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:30.505 09:44:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:30.505 09:44:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:30.505 09:44:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:30.505 [2024-07-14 09:44:14.726572] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:30.505 [2024-07-14 09:44:14.727731] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:30.505 [2024-07-14 09:44:14.728197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.505 [2024-07-14 09:44:14.728229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:30.505 [2024-07-14 09:44:14.728245] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:30.505 [2024-07-14 09:44:14.728460] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:30.505 [2024-07-14 09:44:14.728686] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:30.505 [2024-07-14 09:44:14.728706] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:30.505 [2024-07-14 09:44:14.728719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:30.505 [2024-07-14 09:44:14.731902] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:30.505 09:44:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:30.505 09:44:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:30.505 09:44:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:30.505 09:44:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:30.505 [2024-07-14 09:44:14.741401] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:30.505 [2024-07-14 09:44:14.741855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.505 [2024-07-14 09:44:14.741890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:30.505 [2024-07-14 09:44:14.741906] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:30.505 [2024-07-14 09:44:14.742130] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:30.505 [2024-07-14 09:44:14.742367] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:30.505 [2024-07-14 09:44:14.742387] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:30.505 [2024-07-14 09:44:14.742400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:34:30.505 [2024-07-14 09:44:14.745525] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:30.505 [2024-07-14 09:44:14.755007] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:30.505 [2024-07-14 09:44:14.755469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.505 [2024-07-14 09:44:14.755497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:30.505 [2024-07-14 09:44:14.755513] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:30.505 [2024-07-14 09:44:14.755729] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:30.505 [2024-07-14 09:44:14.756000] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:30.505 [2024-07-14 09:44:14.756040] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:30.505 [2024-07-14 09:44:14.756054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:30.505 [2024-07-14 09:44:14.759363] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:30.505 [2024-07-14 09:44:14.768518] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:30.505 [2024-07-14 09:44:14.769120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.505 [2024-07-14 09:44:14.769161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:30.505 [2024-07-14 09:44:14.769180] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:30.505 [2024-07-14 09:44:14.769405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:30.505 [2024-07-14 09:44:14.769626] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:30.505 [2024-07-14 09:44:14.769648] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:30.505 [2024-07-14 09:44:14.769664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:30.505 [2024-07-14 09:44:14.772811] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:30.505 Malloc0 00:34:30.505 09:44:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:30.505 09:44:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:30.505 09:44:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:30.505 09:44:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:30.505 [2024-07-14 09:44:14.782092] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:30.505 09:44:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:30.505 [2024-07-14 09:44:14.782517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.505 [2024-07-14 09:44:14.782544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f6f70 with addr=10.0.0.2, port=4420 00:34:30.505 [2024-07-14 09:44:14.782560] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6f70 is same with the state(5) to be set 00:34:30.505 09:44:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:30.505 09:44:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:30.505 [2024-07-14 09:44:14.782775] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f6f70 (9): Bad file descriptor 00:34:30.505 09:44:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:30.505 [2024-07-14 09:44:14.783003] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:30.505 [2024-07-14 09:44:14.783024] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:30.505 [2024-07-14 09:44:14.783038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:30.505 [2024-07-14 09:44:14.786342] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:30.505 09:44:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:30.505 09:44:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:30.505 09:44:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:30.505 09:44:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:30.505 [2024-07-14 09:44:14.794446] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:30.505 [2024-07-14 09:44:14.795688] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:30.505 09:44:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:30.505 09:44:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 894349 00:34:30.505 [2024-07-14 09:44:14.952330] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
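Interleaved with the reconnect noise, the rpc_cmd calls above rebuild the whole target configuration, which is why the final reset attempt is reported successful once the listener on 10.0.0.2:4420 is back. Collected in one place, the same sequence against the freshly started nvmf_tgt looks roughly like this (a sketch with rpc.py standing in for the script's rpc_cmd wrapper, run in the same namespace as the target):

    RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC nvmf_create_transport -t tcp -o -u 8192                      # TCP transport, flags exactly as bdevperf.sh passes them
    $RPC bdev_malloc_create 64 512 -b Malloc0                         # 64 MiB RAM bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0     # expose the bdev as a namespace
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420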
00:34:40.470 00:34:40.470 Latency(us) 00:34:40.470 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:40.470 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:40.470 Verification LBA range: start 0x0 length 0x4000 00:34:40.470 Nvme1n1 : 15.02 6659.98 26.02 8877.60 0.00 8213.63 1104.40 23592.96 00:34:40.470 =================================================================================================================== 00:34:40.470 Total : 6659.98 26.02 8877.60 0.00 8213.63 1104.40 23592.96 00:34:40.470 09:44:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:34:40.471 09:44:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:40.471 09:44:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:40.471 09:44:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:40.471 09:44:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:40.471 09:44:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:34:40.471 09:44:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:34:40.471 09:44:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:40.471 09:44:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:34:40.471 09:44:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:40.471 09:44:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:34:40.471 09:44:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:40.471 09:44:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:40.471 rmmod nvme_tcp 00:34:40.471 rmmod nvme_fabrics 00:34:40.471 rmmod nvme_keyring 00:34:40.471 09:44:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:40.471 09:44:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:34:40.471 09:44:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:34:40.471 09:44:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 895017 ']' 00:34:40.471 09:44:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 895017 00:34:40.471 09:44:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 895017 ']' 00:34:40.471 09:44:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 895017 00:34:40.471 09:44:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:34:40.471 09:44:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:40.471 09:44:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 895017 00:34:40.471 09:44:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:34:40.471 09:44:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:34:40.471 09:44:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 895017' 00:34:40.471 killing process with pid 895017 00:34:40.471 09:44:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 895017 00:34:40.471 09:44:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 895017 00:34:40.471 09:44:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:40.471 09:44:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:40.471 
09:44:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:40.471 09:44:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:40.471 09:44:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:40.471 09:44:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:40.471 09:44:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:40.471 09:44:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:42.369 09:44:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:42.369 00:34:42.369 real 0m22.380s 00:34:42.369 user 0m59.537s 00:34:42.369 sys 0m4.418s 00:34:42.369 09:44:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:42.369 09:44:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:42.369 ************************************ 00:34:42.369 END TEST nvmf_bdevperf 00:34:42.369 ************************************ 00:34:42.369 09:44:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:34:42.369 09:44:26 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:42.369 09:44:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:42.369 09:44:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:42.369 09:44:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:42.369 ************************************ 00:34:42.369 START TEST nvmf_target_disconnect 00:34:42.369 ************************************ 00:34:42.369 09:44:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:42.369 * Looking for test storage... 
00:34:42.369 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:42.369 09:44:26 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:42.369 09:44:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:34:42.369 09:44:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:42.369 09:44:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:42.369 09:44:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:42.369 09:44:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:42.369 09:44:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:42.369 09:44:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:42.369 09:44:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:42.369 09:44:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:42.369 09:44:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:42.369 09:44:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:42.369 09:44:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:42.369 09:44:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:42.369 09:44:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:42.369 09:44:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:42.369 09:44:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:42.369 09:44:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:42.369 09:44:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:42.369 09:44:26 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:42.369 09:44:26 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:42.369 09:44:26 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:42.370 09:44:26 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:42.370 09:44:26 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:42.370 09:44:26 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:42.370 09:44:26 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:34:42.370 09:44:26 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:42.370 09:44:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:34:42.370 09:44:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:42.370 09:44:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:42.370 09:44:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:42.370 09:44:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:42.370 09:44:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:42.370 09:44:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:42.370 09:44:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:42.370 09:44:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:42.370 09:44:26 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:34:42.370 09:44:26 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:34:42.370 09:44:26 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:34:42.370 09:44:26 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:34:42.370 09:44:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:42.370 09:44:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:42.370 09:44:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:34:42.370 09:44:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:42.370 09:44:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:42.370 09:44:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:42.370 09:44:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:42.370 09:44:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:42.370 09:44:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:42.370 09:44:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:42.370 09:44:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:34:42.370 09:44:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:44.267 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:44.267 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:44.267 09:44:28 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:44.267 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:44.267 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:44.267 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:44.268 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:44.268 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:44.268 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:44.268 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:44.268 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:44.268 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:44.268 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:44.268 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:34:44.268 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:44.268 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:44.268 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:44.268 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:44.268 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:44.268 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:44.268 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:34:44.268 00:34:44.268 --- 10.0.0.2 ping statistics --- 00:34:44.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:44.268 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:34:44.268 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:44.525 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:44.525 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:34:44.525 00:34:44.525 --- 10.0.0.1 ping statistics --- 00:34:44.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:44.525 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:34:44.525 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:44.525 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:34:44.525 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:44.525 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:44.525 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:44.525 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:44.525 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:44.525 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:44.525 09:44:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:44.525 09:44:28 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:34:44.525 09:44:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:44.525 09:44:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:44.525 09:44:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:44.525 ************************************ 00:34:44.525 START TEST nvmf_target_disconnect_tc1 00:34:44.525 ************************************ 00:34:44.525 09:44:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:34:44.525 09:44:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:44.525 09:44:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:34:44.525 
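The nvmf_tcp_init steps traced above move one port of the NIC into a private network namespace and address both ends from 10.0.0.0/24, so the initiator (root namespace) and the target (cvl_0_0_ns_spdk) can reach each other over TCP port 4420. A minimal standalone sketch of the same setup, assuming the cvl_0_0/cvl_0_1 interface names and addresses seen in this trace:

    # target-side port goes into its own namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # address both ends: the initiator side stays in the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

    # bring the links up and open the NVMe/TCP port
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # sanity check in both directions, as the trace does
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1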
09:44:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:44.525 09:44:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:44.525 09:44:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:44.525 09:44:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:44.525 09:44:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:44.525 09:44:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:44.525 09:44:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:44.525 09:44:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:44.525 09:44:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:34:44.525 09:44:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:44.525 EAL: No free 2048 kB hugepages reported on node 1 00:34:44.525 [2024-07-14 09:44:28.864464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.525 [2024-07-14 09:44:28.864533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87f590 with addr=10.0.0.2, port=4420 00:34:44.525 [2024-07-14 09:44:28.864572] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:44.525 [2024-07-14 09:44:28.864594] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:44.525 [2024-07-14 09:44:28.864610] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:34:44.525 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:34:44.525 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:34:44.525 Initializing NVMe Controllers 00:34:44.525 09:44:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:34:44.525 09:44:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:44.525 09:44:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:44.525 09:44:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:44.525 00:34:44.525 real 0m0.111s 00:34:44.525 user 0m0.036s 00:34:44.525 sys 0m0.067s 
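The tc1 case above runs the reconnect example before any target is listening, so spdk_nvme_probe() failing with connect() errno 111 (ECONNREFUSED) is the expected result; the NOT/valid_exec_arg wrapper turns that failure into a pass. A rough sketch of the same expect-failure pattern in plain bash, with a hypothetical expect_failure helper standing in for the suite's own wrapper:

    expect_failure() {
        # return success only if the wrapped command fails
        if "$@"; then
            echo "command unexpectedly succeeded" >&2
            return 1
        fi
        return 0
    }

    # no target is listening on 10.0.0.2:4420 yet, so the probe must fail
    expect_failure ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'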
00:34:44.525 09:44:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:44.525 09:44:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:44.525 ************************************ 00:34:44.525 END TEST nvmf_target_disconnect_tc1 00:34:44.525 ************************************ 00:34:44.525 09:44:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:34:44.525 09:44:28 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:34:44.525 09:44:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:44.525 09:44:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:44.525 09:44:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:44.525 ************************************ 00:34:44.525 START TEST nvmf_target_disconnect_tc2 00:34:44.525 ************************************ 00:34:44.525 09:44:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:34:44.525 09:44:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:34:44.525 09:44:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:34:44.525 09:44:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:44.525 09:44:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:44.525 09:44:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:44.525 09:44:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=898166 00:34:44.525 09:44:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:34:44.525 09:44:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 898166 00:34:44.525 09:44:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 898166 ']' 00:34:44.525 09:44:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:44.525 09:44:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:44.525 09:44:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:44.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
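Once nvmf_tgt is up inside the namespace, disconnect_init configures it through the rpc_cmd calls traced below (malloc bdev, TCP transport, subsystem, namespace, listeners). A standalone sketch of the same sequence using SPDK's scripts/rpc.py, assuming the target was started as in the trace and its RPC socket is reachable at the default /var/tmp/spdk.sock:

    # start the target in the namespace (as the harness does)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &

    # back a 64 MiB / 512 B-block malloc bdev and export it over NVMe/TCP
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420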
00:34:44.525 09:44:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:44.525 09:44:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:44.784 [2024-07-14 09:44:28.983959] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:34:44.784 [2024-07-14 09:44:28.984030] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:44.784 EAL: No free 2048 kB hugepages reported on node 1 00:34:44.784 [2024-07-14 09:44:29.046240] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:44.784 [2024-07-14 09:44:29.136694] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:44.784 [2024-07-14 09:44:29.136745] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:44.784 [2024-07-14 09:44:29.136775] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:44.784 [2024-07-14 09:44:29.136787] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:44.784 [2024-07-14 09:44:29.136796] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:44.784 [2024-07-14 09:44:29.136884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:34:44.784 [2024-07-14 09:44:29.137156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:34:44.784 [2024-07-14 09:44:29.137206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:34:44.784 [2024-07-14 09:44:29.137209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:34:45.041 09:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:45.041 09:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:34:45.041 09:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:45.041 09:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:45.041 09:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:45.041 09:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:45.041 09:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:45.041 09:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.041 09:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:45.041 Malloc0 00:34:45.041 09:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.041 09:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:34:45.041 09:44:29 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.041 09:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:45.041 [2024-07-14 09:44:29.315450] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:45.041 09:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.041 09:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:45.041 09:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.041 09:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:45.041 09:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.041 09:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:45.041 09:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.041 09:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:45.041 09:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.041 09:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:45.041 09:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.041 09:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:45.041 [2024-07-14 09:44:29.343692] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:45.041 09:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.041 09:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:45.041 09:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.041 09:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:45.041 09:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.041 09:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=898189 00:34:45.041 09:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:34:45.041 09:44:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:45.041 EAL: No free 2048 kB 
hugepages reported on node 1 00:34:46.948 09:44:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 898166 00:34:46.948 09:44:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:34:46.948 Read completed with error (sct=0, sc=8) 00:34:46.948 starting I/O failed 00:34:46.948 Read completed with error (sct=0, sc=8) 00:34:46.948 starting I/O failed 00:34:46.948 Read completed with error (sct=0, sc=8) 00:34:46.948 starting I/O failed 00:34:46.948 Read completed with error (sct=0, sc=8) 00:34:46.948 starting I/O failed 00:34:46.948 Read completed with error (sct=0, sc=8) 00:34:46.948 starting I/O failed 00:34:46.948 Read completed with error (sct=0, sc=8) 00:34:46.948 starting I/O failed 00:34:46.948 Read completed with error (sct=0, sc=8) 00:34:46.948 starting I/O failed 00:34:46.948 Read completed with error (sct=0, sc=8) 00:34:46.948 starting I/O failed 00:34:46.948 Read completed with error (sct=0, sc=8) 00:34:46.948 starting I/O failed 00:34:46.948 Read completed with error (sct=0, sc=8) 00:34:46.948 starting I/O failed 00:34:46.948 Write completed with error (sct=0, sc=8) 00:34:46.948 starting I/O failed 00:34:46.948 Write completed with error (sct=0, sc=8) 00:34:46.948 starting I/O failed 00:34:46.948 Read completed with error (sct=0, sc=8) 00:34:46.948 starting I/O failed 00:34:46.948 Read completed with error (sct=0, sc=8) 00:34:46.948 starting I/O failed 00:34:46.948 Read completed with error (sct=0, sc=8) 00:34:46.948 starting I/O failed 00:34:46.948 Read completed with error (sct=0, sc=8) 00:34:46.948 starting I/O failed 00:34:46.948 Write completed with error (sct=0, sc=8) 00:34:46.948 starting I/O failed 00:34:46.948 Write completed with error (sct=0, sc=8) 00:34:46.948 starting I/O failed 00:34:46.948 Write completed with error (sct=0, sc=8) 00:34:46.948 starting I/O failed 00:34:46.948 Read completed with error (sct=0, sc=8) 00:34:46.948 starting I/O failed 00:34:46.948 Read completed with error (sct=0, sc=8) 00:34:46.948 starting I/O failed 00:34:46.948 Write completed with error (sct=0, sc=8) 00:34:46.948 starting I/O failed 00:34:46.948 Read completed with error (sct=0, sc=8) 00:34:46.948 starting I/O failed 00:34:46.948 Write completed with error (sct=0, sc=8) 00:34:46.948 starting I/O failed 00:34:46.948 Write completed with error (sct=0, sc=8) 00:34:46.948 starting I/O failed 00:34:46.948 Write completed with error (sct=0, sc=8) 00:34:46.948 starting I/O failed 00:34:46.948 Write completed with error (sct=0, sc=8) 00:34:46.948 starting I/O failed 00:34:46.948 Read completed with error (sct=0, sc=8) 00:34:46.948 starting I/O failed 00:34:46.948 Write completed with error (sct=0, sc=8) 00:34:46.948 starting I/O failed 00:34:46.948 Write completed with error (sct=0, sc=8) 00:34:46.948 starting I/O failed 00:34:46.948 Write completed with error (sct=0, sc=8) 00:34:46.948 starting I/O failed 00:34:46.948 Write completed with error (sct=0, sc=8) 00:34:46.948 starting I/O failed 00:34:46.948 Read completed with error (sct=0, sc=8) 00:34:46.948 starting I/O failed 00:34:46.948 Read completed with error (sct=0, sc=8) 00:34:46.948 starting I/O failed 00:34:46.948 [2024-07-14 09:44:31.368884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:46.948 Read completed with error (sct=0, sc=8) 00:34:46.948 starting I/O failed 00:34:46.948 Read completed with error (sct=0, sc=8) 00:34:46.948 
starting I/O failed 00:34:46.948 Write completed with error (sct=0, sc=8) 00:34:46.948 starting I/O failed 00:34:46.948 Read completed with error (sct=0, sc=8) 00:34:46.948 starting I/O failed 00:34:46.948 Write completed with error (sct=0, sc=8) 00:34:46.948 starting I/O failed 00:34:46.948 Read completed with error (sct=0, sc=8) 00:34:46.948 starting I/O failed 00:34:46.948 Read completed with error (sct=0, sc=8) 00:34:46.948 starting I/O failed 00:34:46.948 Write completed with error (sct=0, sc=8) 00:34:46.948 starting I/O failed 00:34:46.948 Read completed with error (sct=0, sc=8) 00:34:46.948 starting I/O failed 00:34:46.948 Read completed with error (sct=0, sc=8) 00:34:46.948 starting I/O failed 00:34:46.948 Write completed with error (sct=0, sc=8) 00:34:46.948 starting I/O failed 00:34:46.948 Read completed with error (sct=0, sc=8) 00:34:46.948 starting I/O failed 00:34:46.948 Read completed with error (sct=0, sc=8) 00:34:46.948 starting I/O failed 00:34:46.948 Write completed with error (sct=0, sc=8) 00:34:46.948 starting I/O failed 00:34:46.948 Write completed with error (sct=0, sc=8) 00:34:46.948 starting I/O failed 00:34:46.948 Read completed with error (sct=0, sc=8) 00:34:46.948 starting I/O failed 00:34:46.948 Write completed with error (sct=0, sc=8) 00:34:46.948 starting I/O failed 00:34:46.948 Write completed with error (sct=0, sc=8) 00:34:46.948 starting I/O failed 00:34:46.949 Write completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Read completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Write completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Read completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Write completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Write completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Write completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Write completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Write completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Read completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Write completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Write completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 [2024-07-14 09:44:31.369232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.949 Read completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Read completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Read completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Read completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Read completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Read completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Read completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Read completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Read completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Read completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Read completed with error (sct=0, sc=8) 00:34:46.949 starting I/O 
failed 00:34:46.949 Read completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Read completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Read completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Read completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Read completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Write completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Write completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Write completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Read completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Write completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Read completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Read completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Write completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Read completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Read completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Write completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Write completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Read completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Write completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Read completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Read completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 [2024-07-14 09:44:31.369567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:46.949 Read completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Read completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Read completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Read completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Read completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Read completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Read completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Read completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Read completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Read completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Read completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Read completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Read completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Write completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Read completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Read completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Write completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Write completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 
Read completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Write completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Read completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Read completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Read completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Read completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Write completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Write completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Read completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Read completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Read completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Read completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Write completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 Write completed with error (sct=0, sc=8) 00:34:46.949 starting I/O failed 00:34:46.949 [2024-07-14 09:44:31.369827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:46.949 [2024-07-14 09:44:31.370090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:46.949 [2024-07-14 09:44:31.370123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:46.949 qpair failed and we were unable to recover it. 00:34:46.949 [2024-07-14 09:44:31.370341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:46.949 [2024-07-14 09:44:31.370380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:46.949 qpair failed and we were unable to recover it. 00:34:46.949 [2024-07-14 09:44:31.370588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:46.949 [2024-07-14 09:44:31.370631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:46.949 qpair failed and we were unable to recover it. 00:34:46.949 [2024-07-14 09:44:31.370947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:46.949 [2024-07-14 09:44:31.370975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:46.949 qpair failed and we were unable to recover it. 00:34:46.949 [2024-07-14 09:44:31.371173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:46.949 [2024-07-14 09:44:31.371198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:46.949 qpair failed and we were unable to recover it. 00:34:46.949 [2024-07-14 09:44:31.371387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:46.949 [2024-07-14 09:44:31.371412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:46.949 qpair failed and we were unable to recover it. 
00:34:46.949 [2024-07-14 09:44:31.371581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:46.949 [2024-07-14 09:44:31.371606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:46.949 qpair failed and we were unable to recover it. 00:34:46.949 [2024-07-14 09:44:31.371794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:46.949 [2024-07-14 09:44:31.371819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:46.949 qpair failed and we were unable to recover it. 00:34:46.949 [2024-07-14 09:44:31.372002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:46.949 [2024-07-14 09:44:31.372028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:46.949 qpair failed and we were unable to recover it. 00:34:46.949 [2024-07-14 09:44:31.372201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:46.949 [2024-07-14 09:44:31.372226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:46.949 qpair failed and we were unable to recover it. 00:34:46.949 [2024-07-14 09:44:31.372501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:46.949 [2024-07-14 09:44:31.372560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:46.949 qpair failed and we were unable to recover it. 00:34:46.949 [2024-07-14 09:44:31.372871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:46.949 [2024-07-14 09:44:31.372914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:46.949 qpair failed and we were unable to recover it. 00:34:46.949 [2024-07-14 09:44:31.373091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:46.949 [2024-07-14 09:44:31.373116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:46.950 qpair failed and we were unable to recover it. 00:34:46.950 [2024-07-14 09:44:31.373332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:46.950 [2024-07-14 09:44:31.373360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:46.950 qpair failed and we were unable to recover it. 00:34:46.950 [2024-07-14 09:44:31.373777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:46.950 [2024-07-14 09:44:31.373826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:46.950 qpair failed and we were unable to recover it. 00:34:46.950 [2024-07-14 09:44:31.374052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:46.950 [2024-07-14 09:44:31.374078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:46.950 qpair failed and we were unable to recover it. 
00:34:46.950 [2024-07-14 09:44:31.374309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:46.950 [2024-07-14 09:44:31.374334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:46.950 qpair failed and we were unable to recover it. 00:34:46.950 [2024-07-14 09:44:31.374591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:46.950 [2024-07-14 09:44:31.374644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:46.950 qpair failed and we were unable to recover it. 00:34:46.950 [2024-07-14 09:44:31.374878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:46.950 [2024-07-14 09:44:31.374921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:46.950 qpair failed and we were unable to recover it. 00:34:46.950 [2024-07-14 09:44:31.375120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:46.950 [2024-07-14 09:44:31.375145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:46.950 qpair failed and we were unable to recover it. 00:34:46.950 [2024-07-14 09:44:31.375311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:46.950 [2024-07-14 09:44:31.375336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:46.950 qpair failed and we were unable to recover it. 00:34:46.950 [2024-07-14 09:44:31.375550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:46.950 [2024-07-14 09:44:31.375578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:46.950 qpair failed and we were unable to recover it. 00:34:46.950 [2024-07-14 09:44:31.375825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:46.950 [2024-07-14 09:44:31.375877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:46.950 qpair failed and we were unable to recover it. 00:34:46.950 [2024-07-14 09:44:31.376054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:46.950 [2024-07-14 09:44:31.376079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:46.950 qpair failed and we were unable to recover it. 00:34:46.950 [2024-07-14 09:44:31.376276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:46.950 [2024-07-14 09:44:31.376301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:46.950 qpair failed and we were unable to recover it. 00:34:46.950 [2024-07-14 09:44:31.376615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:46.950 [2024-07-14 09:44:31.376670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:46.950 qpair failed and we were unable to recover it. 
00:34:46.950 [2024-07-14 09:44:31.376871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:46.950 [2024-07-14 09:44:31.376897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:46.950 qpair failed and we were unable to recover it.
00:34:46.950 [ ... the same three-line sequence (posix.c:1038:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats for every reconnect attempt between 09:44:31.376871 and 09:44:31.424948, against tqpair handles 0xff6600, 0x7f1660000b90, and 0x7f1668000b90, all targeting addr=10.0.0.2, port=4420 ... ]
00:34:47.229 [2024-07-14 09:44:31.424922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.229 [2024-07-14 09:44:31.424948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.229 qpair failed and we were unable to recover it.
00:34:47.229 [2024-07-14 09:44:31.425152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.229 [2024-07-14 09:44:31.425181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.229 qpair failed and we were unable to recover it. 00:34:47.229 [2024-07-14 09:44:31.425385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.229 [2024-07-14 09:44:31.425413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.229 qpair failed and we were unable to recover it. 00:34:47.229 [2024-07-14 09:44:31.425621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.229 [2024-07-14 09:44:31.425647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.229 qpair failed and we were unable to recover it. 00:34:47.229 [2024-07-14 09:44:31.425831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.229 [2024-07-14 09:44:31.425857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.229 qpair failed and we were unable to recover it. 00:34:47.229 [2024-07-14 09:44:31.426067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.229 [2024-07-14 09:44:31.426093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.229 qpair failed and we were unable to recover it. 00:34:47.229 [2024-07-14 09:44:31.426310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.229 [2024-07-14 09:44:31.426336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.229 qpair failed and we were unable to recover it. 00:34:47.229 [2024-07-14 09:44:31.426517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.229 [2024-07-14 09:44:31.426545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.229 qpair failed and we were unable to recover it. 00:34:47.229 [2024-07-14 09:44:31.426758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.229 [2024-07-14 09:44:31.426787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.229 qpair failed and we were unable to recover it. 00:34:47.229 [2024-07-14 09:44:31.427021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.229 [2024-07-14 09:44:31.427048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.229 qpair failed and we were unable to recover it. 00:34:47.229 [2024-07-14 09:44:31.427241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.229 [2024-07-14 09:44:31.427266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.229 qpair failed and we were unable to recover it. 
00:34:47.229 [2024-07-14 09:44:31.427420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.229 [2024-07-14 09:44:31.427446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.229 qpair failed and we were unable to recover it. 00:34:47.229 [2024-07-14 09:44:31.427637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.229 [2024-07-14 09:44:31.427663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.229 qpair failed and we were unable to recover it. 00:34:47.229 [2024-07-14 09:44:31.427855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.229 [2024-07-14 09:44:31.427886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.229 qpair failed and we were unable to recover it. 00:34:47.229 [2024-07-14 09:44:31.428101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.229 [2024-07-14 09:44:31.428127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.229 qpair failed and we were unable to recover it. 00:34:47.229 [2024-07-14 09:44:31.428319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.229 [2024-07-14 09:44:31.428346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.230 qpair failed and we were unable to recover it. 00:34:47.230 [2024-07-14 09:44:31.428509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.230 [2024-07-14 09:44:31.428536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.230 qpair failed and we were unable to recover it. 00:34:47.230 [2024-07-14 09:44:31.428729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.230 [2024-07-14 09:44:31.428755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.230 qpair failed and we were unable to recover it. 00:34:47.230 [2024-07-14 09:44:31.428919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.230 [2024-07-14 09:44:31.428950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.230 qpair failed and we were unable to recover it. 00:34:47.230 [2024-07-14 09:44:31.429119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.230 [2024-07-14 09:44:31.429145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.230 qpair failed and we were unable to recover it. 00:34:47.230 [2024-07-14 09:44:31.429382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.230 [2024-07-14 09:44:31.429410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.230 qpair failed and we were unable to recover it. 
00:34:47.230 [2024-07-14 09:44:31.429596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.230 [2024-07-14 09:44:31.429622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.230 qpair failed and we were unable to recover it. 00:34:47.230 [2024-07-14 09:44:31.429809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.230 [2024-07-14 09:44:31.429835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.230 qpair failed and we were unable to recover it. 00:34:47.230 [2024-07-14 09:44:31.430033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.230 [2024-07-14 09:44:31.430059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.230 qpair failed and we were unable to recover it. 00:34:47.230 [2024-07-14 09:44:31.430250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.230 [2024-07-14 09:44:31.430277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.230 qpair failed and we were unable to recover it. 00:34:47.230 [2024-07-14 09:44:31.430477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.230 [2024-07-14 09:44:31.430503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.230 qpair failed and we were unable to recover it. 00:34:47.230 [2024-07-14 09:44:31.430680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.230 [2024-07-14 09:44:31.430710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.230 qpair failed and we were unable to recover it. 00:34:47.230 [2024-07-14 09:44:31.430922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.230 [2024-07-14 09:44:31.430948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.230 qpair failed and we were unable to recover it. 00:34:47.230 [2024-07-14 09:44:31.431116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.230 [2024-07-14 09:44:31.431142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.230 qpair failed and we were unable to recover it. 00:34:47.230 [2024-07-14 09:44:31.431356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.230 [2024-07-14 09:44:31.431382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.230 qpair failed and we were unable to recover it. 00:34:47.230 [2024-07-14 09:44:31.431594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.230 [2024-07-14 09:44:31.431619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.230 qpair failed and we were unable to recover it. 
00:34:47.230 [2024-07-14 09:44:31.431777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.230 [2024-07-14 09:44:31.431804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.230 qpair failed and we were unable to recover it. 00:34:47.230 [2024-07-14 09:44:31.432019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.230 [2024-07-14 09:44:31.432046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.230 qpair failed and we were unable to recover it. 00:34:47.230 [2024-07-14 09:44:31.432216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.230 [2024-07-14 09:44:31.432242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.230 qpair failed and we were unable to recover it. 00:34:47.230 [2024-07-14 09:44:31.432435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.230 [2024-07-14 09:44:31.432460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.230 qpair failed and we were unable to recover it. 00:34:47.230 [2024-07-14 09:44:31.432685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.230 [2024-07-14 09:44:31.432711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.230 qpair failed and we were unable to recover it. 00:34:47.230 [2024-07-14 09:44:31.432926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.230 [2024-07-14 09:44:31.432952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.230 qpair failed and we were unable to recover it. 00:34:47.230 [2024-07-14 09:44:31.433117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.230 [2024-07-14 09:44:31.433144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.230 qpair failed and we were unable to recover it. 00:34:47.230 [2024-07-14 09:44:31.433382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.230 [2024-07-14 09:44:31.433411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.230 qpair failed and we were unable to recover it. 00:34:47.230 [2024-07-14 09:44:31.433620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.230 [2024-07-14 09:44:31.433646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.230 qpair failed and we were unable to recover it. 00:34:47.230 [2024-07-14 09:44:31.433837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.230 [2024-07-14 09:44:31.433862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.230 qpair failed and we were unable to recover it. 
00:34:47.230 [2024-07-14 09:44:31.434042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.230 [2024-07-14 09:44:31.434068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.230 qpair failed and we were unable to recover it. 00:34:47.230 [2024-07-14 09:44:31.434258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.230 [2024-07-14 09:44:31.434283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.230 qpair failed and we were unable to recover it. 00:34:47.230 [2024-07-14 09:44:31.434469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.230 [2024-07-14 09:44:31.434494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.230 qpair failed and we were unable to recover it. 00:34:47.230 [2024-07-14 09:44:31.434670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.230 [2024-07-14 09:44:31.434699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.230 qpair failed and we were unable to recover it. 00:34:47.230 [2024-07-14 09:44:31.434919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.230 [2024-07-14 09:44:31.434945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.230 qpair failed and we were unable to recover it. 00:34:47.230 [2024-07-14 09:44:31.435161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.230 [2024-07-14 09:44:31.435186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.230 qpair failed and we were unable to recover it. 00:34:47.230 [2024-07-14 09:44:31.435390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.230 [2024-07-14 09:44:31.435416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.230 qpair failed and we were unable to recover it. 00:34:47.230 [2024-07-14 09:44:31.435633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.230 [2024-07-14 09:44:31.435659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.230 qpair failed and we were unable to recover it. 00:34:47.230 [2024-07-14 09:44:31.435892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.230 [2024-07-14 09:44:31.435918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.230 qpair failed and we were unable to recover it. 00:34:47.230 [2024-07-14 09:44:31.436109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.230 [2024-07-14 09:44:31.436134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.230 qpair failed and we were unable to recover it. 
00:34:47.230 [2024-07-14 09:44:31.436323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.230 [2024-07-14 09:44:31.436348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.230 qpair failed and we were unable to recover it. 00:34:47.230 [2024-07-14 09:44:31.436570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.230 [2024-07-14 09:44:31.436612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.230 qpair failed and we were unable to recover it. 00:34:47.231 [2024-07-14 09:44:31.436827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.231 [2024-07-14 09:44:31.436853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.231 qpair failed and we were unable to recover it. 00:34:47.231 [2024-07-14 09:44:31.437058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.231 [2024-07-14 09:44:31.437084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.231 qpair failed and we were unable to recover it. 00:34:47.231 [2024-07-14 09:44:31.437274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.231 [2024-07-14 09:44:31.437300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.231 qpair failed and we were unable to recover it. 00:34:47.231 [2024-07-14 09:44:31.437484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.231 [2024-07-14 09:44:31.437512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.231 qpair failed and we were unable to recover it. 00:34:47.231 [2024-07-14 09:44:31.437752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.231 [2024-07-14 09:44:31.437778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.231 qpair failed and we were unable to recover it. 00:34:47.231 [2024-07-14 09:44:31.437974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.231 [2024-07-14 09:44:31.438005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.231 qpair failed and we were unable to recover it. 00:34:47.231 [2024-07-14 09:44:31.438216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.231 [2024-07-14 09:44:31.438242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.231 qpair failed and we were unable to recover it. 00:34:47.231 [2024-07-14 09:44:31.438431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.231 [2024-07-14 09:44:31.438457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.231 qpair failed and we were unable to recover it. 
00:34:47.231 [2024-07-14 09:44:31.438651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.231 [2024-07-14 09:44:31.438677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.231 qpair failed and we were unable to recover it. 00:34:47.231 [2024-07-14 09:44:31.438873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.231 [2024-07-14 09:44:31.438900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.231 qpair failed and we were unable to recover it. 00:34:47.231 [2024-07-14 09:44:31.439093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.231 [2024-07-14 09:44:31.439118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.231 qpair failed and we were unable to recover it. 00:34:47.231 [2024-07-14 09:44:31.439277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.231 [2024-07-14 09:44:31.439303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.231 qpair failed and we were unable to recover it. 00:34:47.231 [2024-07-14 09:44:31.439517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.231 [2024-07-14 09:44:31.439543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.231 qpair failed and we were unable to recover it. 00:34:47.231 [2024-07-14 09:44:31.439749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.231 [2024-07-14 09:44:31.439774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.231 qpair failed and we were unable to recover it. 00:34:47.231 [2024-07-14 09:44:31.439969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.231 [2024-07-14 09:44:31.439996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.231 qpair failed and we were unable to recover it. 00:34:47.231 [2024-07-14 09:44:31.440187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.231 [2024-07-14 09:44:31.440212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.231 qpair failed and we were unable to recover it. 00:34:47.231 [2024-07-14 09:44:31.440398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.231 [2024-07-14 09:44:31.440424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.231 qpair failed and we were unable to recover it. 00:34:47.231 [2024-07-14 09:44:31.440631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.231 [2024-07-14 09:44:31.440657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.231 qpair failed and we were unable to recover it. 
00:34:47.231 [2024-07-14 09:44:31.440843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.231 [2024-07-14 09:44:31.440875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.231 qpair failed and we were unable to recover it. 00:34:47.231 [2024-07-14 09:44:31.441094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.231 [2024-07-14 09:44:31.441120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.231 qpair failed and we were unable to recover it. 00:34:47.231 [2024-07-14 09:44:31.441331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.231 [2024-07-14 09:44:31.441357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.231 qpair failed and we were unable to recover it. 00:34:47.231 [2024-07-14 09:44:31.441570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.231 [2024-07-14 09:44:31.441596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.231 qpair failed and we were unable to recover it. 00:34:47.231 [2024-07-14 09:44:31.441785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.231 [2024-07-14 09:44:31.441810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.231 qpair failed and we were unable to recover it. 00:34:47.231 [2024-07-14 09:44:31.441970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.231 [2024-07-14 09:44:31.441996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.231 qpair failed and we were unable to recover it. 00:34:47.231 [2024-07-14 09:44:31.442220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.231 [2024-07-14 09:44:31.442249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.231 qpair failed and we were unable to recover it. 00:34:47.231 [2024-07-14 09:44:31.442455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.231 [2024-07-14 09:44:31.442481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.231 qpair failed and we were unable to recover it. 00:34:47.231 [2024-07-14 09:44:31.442697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.231 [2024-07-14 09:44:31.442723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.231 qpair failed and we were unable to recover it. 00:34:47.231 [2024-07-14 09:44:31.442923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.231 [2024-07-14 09:44:31.442949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.231 qpair failed and we were unable to recover it. 
00:34:47.231 [2024-07-14 09:44:31.443161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.231 [2024-07-14 09:44:31.443187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.231 qpair failed and we were unable to recover it. 00:34:47.231 [2024-07-14 09:44:31.443372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.231 [2024-07-14 09:44:31.443397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.231 qpair failed and we were unable to recover it. 00:34:47.231 [2024-07-14 09:44:31.443582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.231 [2024-07-14 09:44:31.443608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.231 qpair failed and we were unable to recover it. 00:34:47.231 [2024-07-14 09:44:31.443802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.231 [2024-07-14 09:44:31.443828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.231 qpair failed and we were unable to recover it. 00:34:47.231 [2024-07-14 09:44:31.444022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.231 [2024-07-14 09:44:31.444048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.231 qpair failed and we were unable to recover it. 00:34:47.231 [2024-07-14 09:44:31.444259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.231 [2024-07-14 09:44:31.444287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.231 qpair failed and we were unable to recover it. 00:34:47.231 [2024-07-14 09:44:31.444497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.231 [2024-07-14 09:44:31.444523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.231 qpair failed and we were unable to recover it. 00:34:47.231 [2024-07-14 09:44:31.444746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.231 [2024-07-14 09:44:31.444771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.231 qpair failed and we were unable to recover it. 00:34:47.231 [2024-07-14 09:44:31.444961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.231 [2024-07-14 09:44:31.444987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.231 qpair failed and we were unable to recover it. 00:34:47.231 [2024-07-14 09:44:31.445148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.231 [2024-07-14 09:44:31.445174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.231 qpair failed and we were unable to recover it. 
00:34:47.231 [2024-07-14 09:44:31.445399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.231 [2024-07-14 09:44:31.445425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.231 qpair failed and we were unable to recover it. 00:34:47.231 [2024-07-14 09:44:31.445628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.231 [2024-07-14 09:44:31.445654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.231 qpair failed and we were unable to recover it. 00:34:47.231 [2024-07-14 09:44:31.445843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.231 [2024-07-14 09:44:31.445874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.231 qpair failed and we were unable to recover it. 00:34:47.231 [2024-07-14 09:44:31.446062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.231 [2024-07-14 09:44:31.446088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.231 qpair failed and we were unable to recover it. 00:34:47.231 [2024-07-14 09:44:31.446265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.231 [2024-07-14 09:44:31.446294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.231 qpair failed and we were unable to recover it. 00:34:47.231 [2024-07-14 09:44:31.446497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.231 [2024-07-14 09:44:31.446522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.231 qpair failed and we were unable to recover it. 00:34:47.231 [2024-07-14 09:44:31.446702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.231 [2024-07-14 09:44:31.446728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.231 qpair failed and we were unable to recover it. 00:34:47.231 [2024-07-14 09:44:31.446920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.231 [2024-07-14 09:44:31.446951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.231 qpair failed and we were unable to recover it. 00:34:47.231 [2024-07-14 09:44:31.447140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.231 [2024-07-14 09:44:31.447166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.231 qpair failed and we were unable to recover it. 00:34:47.231 [2024-07-14 09:44:31.447356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.231 [2024-07-14 09:44:31.447383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.231 qpair failed and we were unable to recover it. 
00:34:47.231 [2024-07-14 09:44:31.447590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.231 [2024-07-14 09:44:31.447620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.231 qpair failed and we were unable to recover it. 00:34:47.231 [2024-07-14 09:44:31.447826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.231 [2024-07-14 09:44:31.447851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.231 qpair failed and we were unable to recover it. 00:34:47.232 [2024-07-14 09:44:31.448042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.232 [2024-07-14 09:44:31.448068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.232 qpair failed and we were unable to recover it. 00:34:47.232 [2024-07-14 09:44:31.448269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.232 [2024-07-14 09:44:31.448294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.232 qpair failed and we were unable to recover it. 00:34:47.232 [2024-07-14 09:44:31.448486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.232 [2024-07-14 09:44:31.448513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.232 qpair failed and we were unable to recover it. 00:34:47.232 [2024-07-14 09:44:31.448714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.232 [2024-07-14 09:44:31.448740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.232 qpair failed and we were unable to recover it. 00:34:47.232 [2024-07-14 09:44:31.448954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.232 [2024-07-14 09:44:31.448981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.232 qpair failed and we were unable to recover it. 00:34:47.232 [2024-07-14 09:44:31.449144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.232 [2024-07-14 09:44:31.449170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.232 qpair failed and we were unable to recover it. 00:34:47.232 [2024-07-14 09:44:31.449413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.232 [2024-07-14 09:44:31.449441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.232 qpair failed and we were unable to recover it. 00:34:47.232 [2024-07-14 09:44:31.449683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.232 [2024-07-14 09:44:31.449708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.232 qpair failed and we were unable to recover it. 
00:34:47.232 [2024-07-14 09:44:31.449891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.232 [2024-07-14 09:44:31.449917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.232 qpair failed and we were unable to recover it. 00:34:47.232 [2024-07-14 09:44:31.450138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.232 [2024-07-14 09:44:31.450163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.232 qpair failed and we were unable to recover it. 00:34:47.232 [2024-07-14 09:44:31.450355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.232 [2024-07-14 09:44:31.450381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.232 qpair failed and we were unable to recover it. 00:34:47.232 [2024-07-14 09:44:31.450601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.232 [2024-07-14 09:44:31.450626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.232 qpair failed and we were unable to recover it. 00:34:47.232 [2024-07-14 09:44:31.450822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.232 [2024-07-14 09:44:31.450848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.232 qpair failed and we were unable to recover it. 00:34:47.232 [2024-07-14 09:44:31.451073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.232 [2024-07-14 09:44:31.451102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.232 qpair failed and we were unable to recover it. 00:34:47.232 [2024-07-14 09:44:31.451349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.232 [2024-07-14 09:44:31.451375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.232 qpair failed and we were unable to recover it. 00:34:47.232 [2024-07-14 09:44:31.451591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.232 [2024-07-14 09:44:31.451619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.232 qpair failed and we were unable to recover it. 00:34:47.232 [2024-07-14 09:44:31.451846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.232 [2024-07-14 09:44:31.451883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.232 qpair failed and we were unable to recover it. 00:34:47.232 [2024-07-14 09:44:31.452048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.232 [2024-07-14 09:44:31.452074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.232 qpair failed and we were unable to recover it. 
00:34:47.232 [2024-07-14 09:44:31.452283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.232 [2024-07-14 09:44:31.452308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.232 qpair failed and we were unable to recover it. 00:34:47.232 [2024-07-14 09:44:31.452497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.232 [2024-07-14 09:44:31.452523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.232 qpair failed and we were unable to recover it. 00:34:47.232 [2024-07-14 09:44:31.452724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.232 [2024-07-14 09:44:31.452750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.232 qpair failed and we were unable to recover it. 00:34:47.232 [2024-07-14 09:44:31.452959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.232 [2024-07-14 09:44:31.452988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.232 qpair failed and we were unable to recover it. 00:34:47.232 [2024-07-14 09:44:31.453231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.232 [2024-07-14 09:44:31.453257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.232 qpair failed and we were unable to recover it. 00:34:47.232 [2024-07-14 09:44:31.453470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.232 [2024-07-14 09:44:31.453496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.232 qpair failed and we were unable to recover it. 00:34:47.232 [2024-07-14 09:44:31.453660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.232 [2024-07-14 09:44:31.453685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.232 qpair failed and we were unable to recover it. 00:34:47.232 [2024-07-14 09:44:31.453850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.232 [2024-07-14 09:44:31.453882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.232 qpair failed and we were unable to recover it. 00:34:47.232 [2024-07-14 09:44:31.454070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.232 [2024-07-14 09:44:31.454096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.232 qpair failed and we were unable to recover it. 00:34:47.232 [2024-07-14 09:44:31.454283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.232 [2024-07-14 09:44:31.454308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.232 qpair failed and we were unable to recover it. 
00:34:47.232 [2024-07-14 09:44:31.454496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:47.232 [2024-07-14 09:44:31.454522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420
00:34:47.232 qpair failed and we were unable to recover it.
00:34:47.232 [... the same three-line record repeats back-to-back, with only the microsecond timestamps advancing, from 09:44:31.454 through 09:44:31.504 ...]
00:34:47.236 [2024-07-14 09:44:31.504940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:47.236 [2024-07-14 09:44:31.504972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420
00:34:47.236 qpair failed and we were unable to recover it.
00:34:47.236 [2024-07-14 09:44:31.505207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.236 [2024-07-14 09:44:31.505238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.236 qpair failed and we were unable to recover it. 00:34:47.236 [2024-07-14 09:44:31.505474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.236 [2024-07-14 09:44:31.505502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.236 qpair failed and we were unable to recover it. 00:34:47.236 [2024-07-14 09:44:31.505722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.236 [2024-07-14 09:44:31.505752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.236 qpair failed and we were unable to recover it. 00:34:47.236 [2024-07-14 09:44:31.505970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.236 [2024-07-14 09:44:31.506001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.236 qpair failed and we were unable to recover it. 00:34:47.236 [2024-07-14 09:44:31.506205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.236 [2024-07-14 09:44:31.506232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.236 qpair failed and we were unable to recover it. 00:34:47.236 [2024-07-14 09:44:31.506446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.236 [2024-07-14 09:44:31.506476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.236 qpair failed and we were unable to recover it. 00:34:47.236 [2024-07-14 09:44:31.506696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.236 [2024-07-14 09:44:31.506727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.237 qpair failed and we were unable to recover it. 00:34:47.237 [2024-07-14 09:44:31.506935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.237 [2024-07-14 09:44:31.506964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.237 qpair failed and we were unable to recover it. 00:34:47.237 [2024-07-14 09:44:31.507183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.237 [2024-07-14 09:44:31.507213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.237 qpair failed and we were unable to recover it. 00:34:47.237 [2024-07-14 09:44:31.507457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.237 [2024-07-14 09:44:31.507488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.237 qpair failed and we were unable to recover it. 
00:34:47.237 [2024-07-14 09:44:31.507719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.237 [2024-07-14 09:44:31.507747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.237 qpair failed and we were unable to recover it. 00:34:47.237 [2024-07-14 09:44:31.507985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.237 [2024-07-14 09:44:31.508016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.237 qpair failed and we were unable to recover it. 00:34:47.237 [2024-07-14 09:44:31.508248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.237 [2024-07-14 09:44:31.508279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.237 qpair failed and we were unable to recover it. 00:34:47.237 [2024-07-14 09:44:31.508491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.237 [2024-07-14 09:44:31.508518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.237 qpair failed and we were unable to recover it. 00:34:47.237 [2024-07-14 09:44:31.508735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.237 [2024-07-14 09:44:31.508766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.237 qpair failed and we were unable to recover it. 00:34:47.237 [2024-07-14 09:44:31.508994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.237 [2024-07-14 09:44:31.509025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.237 qpair failed and we were unable to recover it. 00:34:47.237 [2024-07-14 09:44:31.509201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.237 [2024-07-14 09:44:31.509228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.237 qpair failed and we were unable to recover it. 00:34:47.237 [2024-07-14 09:44:31.509416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.237 [2024-07-14 09:44:31.509448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.237 qpair failed and we were unable to recover it. 00:34:47.237 [2024-07-14 09:44:31.509650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.237 [2024-07-14 09:44:31.509681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.237 qpair failed and we were unable to recover it. 00:34:47.237 [2024-07-14 09:44:31.509908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.237 [2024-07-14 09:44:31.509936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.237 qpair failed and we were unable to recover it. 
00:34:47.237 [2024-07-14 09:44:31.510146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.237 [2024-07-14 09:44:31.510177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.237 qpair failed and we were unable to recover it. 00:34:47.237 [2024-07-14 09:44:31.510385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.237 [2024-07-14 09:44:31.510416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.237 qpair failed and we were unable to recover it. 00:34:47.237 [2024-07-14 09:44:31.510619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.237 [2024-07-14 09:44:31.510646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.237 qpair failed and we were unable to recover it. 00:34:47.237 [2024-07-14 09:44:31.510890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.237 [2024-07-14 09:44:31.510921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.237 qpair failed and we were unable to recover it. 00:34:47.237 [2024-07-14 09:44:31.511166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.237 [2024-07-14 09:44:31.511197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.237 qpair failed and we were unable to recover it. 00:34:47.237 [2024-07-14 09:44:31.511397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.237 [2024-07-14 09:44:31.511425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.237 qpair failed and we were unable to recover it. 00:34:47.237 [2024-07-14 09:44:31.511677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.237 [2024-07-14 09:44:31.511708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.237 qpair failed and we were unable to recover it. 00:34:47.237 [2024-07-14 09:44:31.511917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.237 [2024-07-14 09:44:31.511948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.237 qpair failed and we were unable to recover it. 00:34:47.237 [2024-07-14 09:44:31.512163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.237 [2024-07-14 09:44:31.512191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.237 qpair failed and we were unable to recover it. 00:34:47.237 [2024-07-14 09:44:31.512400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.237 [2024-07-14 09:44:31.512430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.237 qpair failed and we were unable to recover it. 
00:34:47.237 [2024-07-14 09:44:31.512648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.237 [2024-07-14 09:44:31.512678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.237 qpair failed and we were unable to recover it. 00:34:47.237 [2024-07-14 09:44:31.512861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.237 [2024-07-14 09:44:31.512902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.237 qpair failed and we were unable to recover it. 00:34:47.237 [2024-07-14 09:44:31.513112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.237 [2024-07-14 09:44:31.513156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.237 qpair failed and we were unable to recover it. 00:34:47.237 [2024-07-14 09:44:31.513340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.237 [2024-07-14 09:44:31.513370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.237 qpair failed and we were unable to recover it. 00:34:47.237 [2024-07-14 09:44:31.513590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.237 [2024-07-14 09:44:31.513618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.237 qpair failed and we were unable to recover it. 00:34:47.237 [2024-07-14 09:44:31.513772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.237 [2024-07-14 09:44:31.513799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.237 qpair failed and we were unable to recover it. 00:34:47.237 [2024-07-14 09:44:31.514014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.237 [2024-07-14 09:44:31.514045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.237 qpair failed and we were unable to recover it. 00:34:47.237 [2024-07-14 09:44:31.514266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.237 [2024-07-14 09:44:31.514301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.237 qpair failed and we were unable to recover it. 00:34:47.237 [2024-07-14 09:44:31.514479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.237 [2024-07-14 09:44:31.514506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.237 qpair failed and we were unable to recover it. 00:34:47.237 [2024-07-14 09:44:31.514718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.237 [2024-07-14 09:44:31.514748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.237 qpair failed and we were unable to recover it. 
00:34:47.237 [2024-07-14 09:44:31.514983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.237 [2024-07-14 09:44:31.515011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.237 qpair failed and we were unable to recover it. 00:34:47.237 [2024-07-14 09:44:31.515226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.237 [2024-07-14 09:44:31.515258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.237 qpair failed and we were unable to recover it. 00:34:47.237 [2024-07-14 09:44:31.515447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.237 [2024-07-14 09:44:31.515476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.237 qpair failed and we were unable to recover it. 00:34:47.237 [2024-07-14 09:44:31.515696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.237 [2024-07-14 09:44:31.515724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.237 qpair failed and we were unable to recover it. 00:34:47.237 [2024-07-14 09:44:31.515962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.237 [2024-07-14 09:44:31.515994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.237 qpair failed and we were unable to recover it. 00:34:47.237 [2024-07-14 09:44:31.516168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.237 [2024-07-14 09:44:31.516199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.237 qpair failed and we were unable to recover it. 00:34:47.237 [2024-07-14 09:44:31.516409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.237 [2024-07-14 09:44:31.516437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.237 qpair failed and we were unable to recover it. 00:34:47.237 [2024-07-14 09:44:31.516650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.237 [2024-07-14 09:44:31.516681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.237 qpair failed and we were unable to recover it. 00:34:47.237 [2024-07-14 09:44:31.516904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.237 [2024-07-14 09:44:31.516935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.237 qpair failed and we were unable to recover it. 00:34:47.237 [2024-07-14 09:44:31.517156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.237 [2024-07-14 09:44:31.517184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.237 qpair failed and we were unable to recover it. 
00:34:47.237 [2024-07-14 09:44:31.517351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.237 [2024-07-14 09:44:31.517378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.237 qpair failed and we were unable to recover it. 00:34:47.238 [2024-07-14 09:44:31.517607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.238 [2024-07-14 09:44:31.517652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.238 qpair failed and we were unable to recover it. 00:34:47.238 [2024-07-14 09:44:31.517863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.238 [2024-07-14 09:44:31.517898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.238 qpair failed and we were unable to recover it. 00:34:47.238 [2024-07-14 09:44:31.518145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.238 [2024-07-14 09:44:31.518174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.238 qpair failed and we were unable to recover it. 00:34:47.238 [2024-07-14 09:44:31.518361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.238 [2024-07-14 09:44:31.518393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.238 qpair failed and we were unable to recover it. 00:34:47.238 [2024-07-14 09:44:31.518583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.238 [2024-07-14 09:44:31.518611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.238 qpair failed and we were unable to recover it. 00:34:47.238 [2024-07-14 09:44:31.518797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.238 [2024-07-14 09:44:31.518824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.238 qpair failed and we were unable to recover it. 00:34:47.238 [2024-07-14 09:44:31.519024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.238 [2024-07-14 09:44:31.519055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.238 qpair failed and we were unable to recover it. 00:34:47.238 [2024-07-14 09:44:31.519275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.238 [2024-07-14 09:44:31.519302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.238 qpair failed and we were unable to recover it. 00:34:47.238 [2024-07-14 09:44:31.519525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.238 [2024-07-14 09:44:31.519556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.238 qpair failed and we were unable to recover it. 
00:34:47.238 [2024-07-14 09:44:31.519764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.238 [2024-07-14 09:44:31.519794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.238 qpair failed and we were unable to recover it. 00:34:47.238 [2024-07-14 09:44:31.519985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.238 [2024-07-14 09:44:31.520012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.238 qpair failed and we were unable to recover it. 00:34:47.238 [2024-07-14 09:44:31.520197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.238 [2024-07-14 09:44:31.520229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.238 qpair failed and we were unable to recover it. 00:34:47.238 [2024-07-14 09:44:31.520418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.238 [2024-07-14 09:44:31.520447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.238 qpair failed and we were unable to recover it. 00:34:47.238 [2024-07-14 09:44:31.520694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.238 [2024-07-14 09:44:31.520722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.238 qpair failed and we were unable to recover it. 00:34:47.238 [2024-07-14 09:44:31.520967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.238 [2024-07-14 09:44:31.520998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.238 qpair failed and we were unable to recover it. 00:34:47.238 [2024-07-14 09:44:31.521209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.238 [2024-07-14 09:44:31.521237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.238 qpair failed and we were unable to recover it. 00:34:47.238 [2024-07-14 09:44:31.521459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.238 [2024-07-14 09:44:31.521487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.238 qpair failed and we were unable to recover it. 00:34:47.238 [2024-07-14 09:44:31.521705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.238 [2024-07-14 09:44:31.521736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.238 qpair failed and we were unable to recover it. 00:34:47.238 [2024-07-14 09:44:31.521927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.238 [2024-07-14 09:44:31.521956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.238 qpair failed and we were unable to recover it. 
00:34:47.238 [2024-07-14 09:44:31.522152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.238 [2024-07-14 09:44:31.522180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.238 qpair failed and we were unable to recover it. 00:34:47.238 [2024-07-14 09:44:31.522399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.238 [2024-07-14 09:44:31.522430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.238 qpair failed and we were unable to recover it. 00:34:47.238 [2024-07-14 09:44:31.522669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.238 [2024-07-14 09:44:31.522699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.238 qpair failed and we were unable to recover it. 00:34:47.238 [2024-07-14 09:44:31.522923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.238 [2024-07-14 09:44:31.522952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.238 qpair failed and we were unable to recover it. 00:34:47.238 [2024-07-14 09:44:31.523170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.238 [2024-07-14 09:44:31.523201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.238 qpair failed and we were unable to recover it. 00:34:47.238 [2024-07-14 09:44:31.523411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.238 [2024-07-14 09:44:31.523443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.238 qpair failed and we were unable to recover it. 00:34:47.238 [2024-07-14 09:44:31.523660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.238 [2024-07-14 09:44:31.523688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.238 qpair failed and we were unable to recover it. 00:34:47.238 [2024-07-14 09:44:31.523860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.238 [2024-07-14 09:44:31.523907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.238 qpair failed and we were unable to recover it. 00:34:47.238 [2024-07-14 09:44:31.524128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.238 [2024-07-14 09:44:31.524158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.238 qpair failed and we were unable to recover it. 00:34:47.238 [2024-07-14 09:44:31.524344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.238 [2024-07-14 09:44:31.524371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.238 qpair failed and we were unable to recover it. 
00:34:47.238 [2024-07-14 09:44:31.524562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.238 [2024-07-14 09:44:31.524592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.238 qpair failed and we were unable to recover it. 00:34:47.238 [2024-07-14 09:44:31.524854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.238 [2024-07-14 09:44:31.524890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.238 qpair failed and we were unable to recover it. 00:34:47.238 [2024-07-14 09:44:31.525091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.238 [2024-07-14 09:44:31.525119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.238 qpair failed and we were unable to recover it. 00:34:47.238 [2024-07-14 09:44:31.525310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.238 [2024-07-14 09:44:31.525338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.238 qpair failed and we were unable to recover it. 00:34:47.238 [2024-07-14 09:44:31.525525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.238 [2024-07-14 09:44:31.525553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.238 qpair failed and we were unable to recover it. 00:34:47.238 [2024-07-14 09:44:31.525781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.238 [2024-07-14 09:44:31.525809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.238 qpair failed and we were unable to recover it. 00:34:47.238 [2024-07-14 09:44:31.526044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.238 [2024-07-14 09:44:31.526075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.238 qpair failed and we were unable to recover it. 00:34:47.238 [2024-07-14 09:44:31.526294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.238 [2024-07-14 09:44:31.526325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.238 qpair failed and we were unable to recover it. 00:34:47.238 [2024-07-14 09:44:31.526563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.238 [2024-07-14 09:44:31.526591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.238 qpair failed and we were unable to recover it. 00:34:47.238 [2024-07-14 09:44:31.526760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.238 [2024-07-14 09:44:31.526789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.238 qpair failed and we were unable to recover it. 
00:34:47.238 [2024-07-14 09:44:31.527032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.238 [2024-07-14 09:44:31.527064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.238 qpair failed and we were unable to recover it. 00:34:47.238 [2024-07-14 09:44:31.527278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.238 [2024-07-14 09:44:31.527306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.238 qpair failed and we were unable to recover it. 00:34:47.238 [2024-07-14 09:44:31.527543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.238 [2024-07-14 09:44:31.527573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.238 qpair failed and we were unable to recover it. 00:34:47.238 [2024-07-14 09:44:31.527791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.238 [2024-07-14 09:44:31.527822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.238 qpair failed and we were unable to recover it. 00:34:47.238 [2024-07-14 09:44:31.528037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.238 [2024-07-14 09:44:31.528066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.238 qpair failed and we were unable to recover it. 00:34:47.238 [2024-07-14 09:44:31.528282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.238 [2024-07-14 09:44:31.528313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.238 qpair failed and we were unable to recover it. 00:34:47.238 [2024-07-14 09:44:31.528512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.238 [2024-07-14 09:44:31.528542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.238 qpair failed and we were unable to recover it. 00:34:47.238 [2024-07-14 09:44:31.528749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.238 [2024-07-14 09:44:31.528777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.238 qpair failed and we were unable to recover it. 00:34:47.238 [2024-07-14 09:44:31.528972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.238 [2024-07-14 09:44:31.529001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.238 qpair failed and we were unable to recover it. 00:34:47.238 [2024-07-14 09:44:31.529168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.238 [2024-07-14 09:44:31.529195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.238 qpair failed and we were unable to recover it. 
00:34:47.238 [2024-07-14 09:44:31.529367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.238 [2024-07-14 09:44:31.529396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.238 qpair failed and we were unable to recover it. 00:34:47.238 [2024-07-14 09:44:31.529638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.238 [2024-07-14 09:44:31.529665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.238 qpair failed and we were unable to recover it. 00:34:47.238 [2024-07-14 09:44:31.529870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.239 [2024-07-14 09:44:31.529898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.239 qpair failed and we were unable to recover it. 00:34:47.239 [2024-07-14 09:44:31.530055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.239 [2024-07-14 09:44:31.530083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.239 qpair failed and we were unable to recover it. 00:34:47.239 [2024-07-14 09:44:31.530307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.239 [2024-07-14 09:44:31.530338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.239 qpair failed and we were unable to recover it. 00:34:47.239 [2024-07-14 09:44:31.530552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.239 [2024-07-14 09:44:31.530581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.239 qpair failed and we were unable to recover it. 00:34:47.239 [2024-07-14 09:44:31.530789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.239 [2024-07-14 09:44:31.530819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.239 qpair failed and we were unable to recover it. 00:34:47.239 [2024-07-14 09:44:31.531012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.239 [2024-07-14 09:44:31.531040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.239 qpair failed and we were unable to recover it. 00:34:47.239 [2024-07-14 09:44:31.531261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.239 [2024-07-14 09:44:31.531292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.239 qpair failed and we were unable to recover it. 00:34:47.239 [2024-07-14 09:44:31.531537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.239 [2024-07-14 09:44:31.531564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.239 qpair failed and we were unable to recover it. 
00:34:47.239 [2024-07-14 09:44:31.531779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.239 [2024-07-14 09:44:31.531809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.239 qpair failed and we were unable to recover it. 00:34:47.239 [2024-07-14 09:44:31.532053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.239 [2024-07-14 09:44:31.532081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.239 qpair failed and we were unable to recover it. 00:34:47.239 [2024-07-14 09:44:31.532246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.239 [2024-07-14 09:44:31.532274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.239 qpair failed and we were unable to recover it. 00:34:47.239 [2024-07-14 09:44:31.532492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.239 [2024-07-14 09:44:31.532523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.239 qpair failed and we were unable to recover it. 00:34:47.239 [2024-07-14 09:44:31.532711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.239 [2024-07-14 09:44:31.532740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.239 qpair failed and we were unable to recover it. 00:34:47.239 [2024-07-14 09:44:31.532953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.239 [2024-07-14 09:44:31.532981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.239 qpair failed and we were unable to recover it. 00:34:47.239 [2024-07-14 09:44:31.533228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.239 [2024-07-14 09:44:31.533259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.239 qpair failed and we were unable to recover it. 00:34:47.239 [2024-07-14 09:44:31.533469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.239 [2024-07-14 09:44:31.533499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.239 qpair failed and we were unable to recover it. 00:34:47.239 [2024-07-14 09:44:31.533720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.239 [2024-07-14 09:44:31.533748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.239 qpair failed and we were unable to recover it. 00:34:47.239 [2024-07-14 09:44:31.533938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.239 [2024-07-14 09:44:31.533969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.239 qpair failed and we were unable to recover it. 
00:34:47.239 [2024-07-14 09:44:31.534174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.239 [2024-07-14 09:44:31.534204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.239 qpair failed and we were unable to recover it. 00:34:47.239 [2024-07-14 09:44:31.534437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.239 [2024-07-14 09:44:31.534465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.239 qpair failed and we were unable to recover it. 00:34:47.239 [2024-07-14 09:44:31.534719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.239 [2024-07-14 09:44:31.534750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.239 qpair failed and we were unable to recover it. 00:34:47.239 [2024-07-14 09:44:31.534993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.239 [2024-07-14 09:44:31.535021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.239 qpair failed and we were unable to recover it. 00:34:47.239 [2024-07-14 09:44:31.535185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.239 [2024-07-14 09:44:31.535213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.239 qpair failed and we were unable to recover it. 00:34:47.239 [2024-07-14 09:44:31.535380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.239 [2024-07-14 09:44:31.535408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.239 qpair failed and we were unable to recover it. 00:34:47.239 [2024-07-14 09:44:31.535624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.239 [2024-07-14 09:44:31.535652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.239 qpair failed and we were unable to recover it. 00:34:47.239 [2024-07-14 09:44:31.535873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.239 [2024-07-14 09:44:31.535901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.239 qpair failed and we were unable to recover it. 00:34:47.239 [2024-07-14 09:44:31.536115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.239 [2024-07-14 09:44:31.536147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.239 qpair failed and we were unable to recover it. 00:34:47.239 [2024-07-14 09:44:31.536393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.239 [2024-07-14 09:44:31.536423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.239 qpair failed and we were unable to recover it. 
00:34:47.239 [2024-07-14 09:44:31.536643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.239 [2024-07-14 09:44:31.536671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.239 qpair failed and we were unable to recover it. 00:34:47.239 [2024-07-14 09:44:31.536878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.239 [2024-07-14 09:44:31.536922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.239 qpair failed and we were unable to recover it. 00:34:47.239 [2024-07-14 09:44:31.537134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.239 [2024-07-14 09:44:31.537166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.239 qpair failed and we were unable to recover it. 00:34:47.239 [2024-07-14 09:44:31.537381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.239 [2024-07-14 09:44:31.537410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.239 qpair failed and we were unable to recover it. 00:34:47.239 [2024-07-14 09:44:31.537656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.239 [2024-07-14 09:44:31.537687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.239 qpair failed and we were unable to recover it. 00:34:47.239 [2024-07-14 09:44:31.537909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.239 [2024-07-14 09:44:31.537941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.239 qpair failed and we were unable to recover it. 00:34:47.239 [2024-07-14 09:44:31.538189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.239 [2024-07-14 09:44:31.538217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.239 qpair failed and we were unable to recover it. 00:34:47.239 [2024-07-14 09:44:31.538461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.239 [2024-07-14 09:44:31.538488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.239 qpair failed and we were unable to recover it. 00:34:47.239 [2024-07-14 09:44:31.538712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.239 [2024-07-14 09:44:31.538742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.239 qpair failed and we were unable to recover it. 00:34:47.239 [2024-07-14 09:44:31.538968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.239 [2024-07-14 09:44:31.538997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.239 qpair failed and we were unable to recover it. 
00:34:47.239 [2024-07-14 09:44:31.539210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.239 [2024-07-14 09:44:31.539240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.239 qpair failed and we were unable to recover it. 00:34:47.239 [2024-07-14 09:44:31.539443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.239 [2024-07-14 09:44:31.539474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.239 qpair failed and we were unable to recover it. 00:34:47.239 [2024-07-14 09:44:31.539701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.239 [2024-07-14 09:44:31.539729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.239 qpair failed and we were unable to recover it. 00:34:47.239 [2024-07-14 09:44:31.539968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.239 [2024-07-14 09:44:31.540001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.239 qpair failed and we were unable to recover it. 00:34:47.239 [2024-07-14 09:44:31.540239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.239 [2024-07-14 09:44:31.540274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.239 qpair failed and we were unable to recover it. 00:34:47.239 [2024-07-14 09:44:31.540469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.239 [2024-07-14 09:44:31.540497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.239 qpair failed and we were unable to recover it. 00:34:47.239 [2024-07-14 09:44:31.540710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.239 [2024-07-14 09:44:31.540755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.239 qpair failed and we were unable to recover it. 00:34:47.239 [2024-07-14 09:44:31.541000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.239 [2024-07-14 09:44:31.541028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.239 qpair failed and we were unable to recover it. 00:34:47.239 [2024-07-14 09:44:31.541197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.239 [2024-07-14 09:44:31.541225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.239 qpair failed and we were unable to recover it. 00:34:47.239 [2024-07-14 09:44:31.541411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.239 [2024-07-14 09:44:31.541438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.239 qpair failed and we were unable to recover it. 
00:34:47.239 [2024-07-14 09:44:31.541685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.239 [2024-07-14 09:44:31.541715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.239 qpair failed and we were unable to recover it. 00:34:47.239 [2024-07-14 09:44:31.541956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.239 [2024-07-14 09:44:31.541984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.239 qpair failed and we were unable to recover it. 00:34:47.239 [2024-07-14 09:44:31.542176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.239 [2024-07-14 09:44:31.542208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.239 qpair failed and we were unable to recover it. 00:34:47.239 [2024-07-14 09:44:31.542393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.239 [2024-07-14 09:44:31.542424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.239 qpair failed and we were unable to recover it. 00:34:47.239 [2024-07-14 09:44:31.542631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.239 [2024-07-14 09:44:31.542658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.239 qpair failed and we were unable to recover it. 00:34:47.240 [2024-07-14 09:44:31.542901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.240 [2024-07-14 09:44:31.542932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.240 qpair failed and we were unable to recover it. 00:34:47.240 [2024-07-14 09:44:31.543143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.240 [2024-07-14 09:44:31.543174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.240 qpair failed and we were unable to recover it. 00:34:47.240 [2024-07-14 09:44:31.543363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.240 [2024-07-14 09:44:31.543390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.240 qpair failed and we were unable to recover it. 00:34:47.240 [2024-07-14 09:44:31.543589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.240 [2024-07-14 09:44:31.543620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.240 qpair failed and we were unable to recover it. 00:34:47.240 [2024-07-14 09:44:31.543827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.240 [2024-07-14 09:44:31.543858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.240 qpair failed and we were unable to recover it. 
00:34:47.240 [2024-07-14 09:44:31.544078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.240 [2024-07-14 09:44:31.544105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.240 qpair failed and we were unable to recover it. 00:34:47.240 [2024-07-14 09:44:31.544343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.240 [2024-07-14 09:44:31.544374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.240 qpair failed and we were unable to recover it. 00:34:47.240 [2024-07-14 09:44:31.544593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.240 [2024-07-14 09:44:31.544624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.240 qpair failed and we were unable to recover it. 00:34:47.240 [2024-07-14 09:44:31.544877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.240 [2024-07-14 09:44:31.544905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.240 qpair failed and we were unable to recover it. 00:34:47.240 [2024-07-14 09:44:31.545142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.240 [2024-07-14 09:44:31.545172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.240 qpair failed and we were unable to recover it. 00:34:47.240 [2024-07-14 09:44:31.545410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.240 [2024-07-14 09:44:31.545440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.240 qpair failed and we were unable to recover it. 00:34:47.240 [2024-07-14 09:44:31.545677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.240 [2024-07-14 09:44:31.545705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.240 qpair failed and we were unable to recover it. 00:34:47.240 [2024-07-14 09:44:31.545896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.240 [2024-07-14 09:44:31.545927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.240 qpair failed and we were unable to recover it. 00:34:47.240 [2024-07-14 09:44:31.546142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.240 [2024-07-14 09:44:31.546169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.240 qpair failed and we were unable to recover it. 00:34:47.240 [2024-07-14 09:44:31.546387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.240 [2024-07-14 09:44:31.546414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.240 qpair failed and we were unable to recover it. 
00:34:47.240 [2024-07-14 09:44:31.546662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.240 [2024-07-14 09:44:31.546690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.240 qpair failed and we were unable to recover it. 00:34:47.240 [2024-07-14 09:44:31.546932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.240 [2024-07-14 09:44:31.546963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.240 qpair failed and we were unable to recover it. 00:34:47.240 [2024-07-14 09:44:31.547186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.240 [2024-07-14 09:44:31.547214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.240 qpair failed and we were unable to recover it. 00:34:47.240 [2024-07-14 09:44:31.547449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.240 [2024-07-14 09:44:31.547479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.240 qpair failed and we were unable to recover it. 00:34:47.240 [2024-07-14 09:44:31.547692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.240 [2024-07-14 09:44:31.547723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.240 qpair failed and we were unable to recover it. 00:34:47.240 [2024-07-14 09:44:31.547939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.240 [2024-07-14 09:44:31.547967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.240 qpair failed and we were unable to recover it. 00:34:47.240 [2024-07-14 09:44:31.548164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.240 [2024-07-14 09:44:31.548192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.240 qpair failed and we were unable to recover it. 00:34:47.240 [2024-07-14 09:44:31.548441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.240 [2024-07-14 09:44:31.548470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.240 qpair failed and we were unable to recover it. 00:34:47.240 [2024-07-14 09:44:31.548691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.240 [2024-07-14 09:44:31.548722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.240 qpair failed and we were unable to recover it. 00:34:47.240 [2024-07-14 09:44:31.548966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.240 [2024-07-14 09:44:31.548994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.240 qpair failed and we were unable to recover it. 
00:34:47.240 [2024-07-14 09:44:31.549375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.240 [2024-07-14 09:44:31.549425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.240 qpair failed and we were unable to recover it. 00:34:47.240 [2024-07-14 09:44:31.549677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.240 [2024-07-14 09:44:31.549705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.240 qpair failed and we were unable to recover it. 00:34:47.240 [2024-07-14 09:44:31.549924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.240 [2024-07-14 09:44:31.549968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.240 qpair failed and we were unable to recover it. 00:34:47.240 [2024-07-14 09:44:31.550166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.240 [2024-07-14 09:44:31.550194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.240 qpair failed and we were unable to recover it. 00:34:47.240 [2024-07-14 09:44:31.550415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.240 [2024-07-14 09:44:31.550473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.240 qpair failed and we were unable to recover it. 00:34:47.240 [2024-07-14 09:44:31.550718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.240 [2024-07-14 09:44:31.550749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.240 qpair failed and we were unable to recover it. 00:34:47.240 [2024-07-14 09:44:31.550965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.240 [2024-07-14 09:44:31.550996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.240 qpair failed and we were unable to recover it. 00:34:47.240 [2024-07-14 09:44:31.551232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.240 [2024-07-14 09:44:31.551260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.240 qpair failed and we were unable to recover it. 00:34:47.240 [2024-07-14 09:44:31.551625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.240 [2024-07-14 09:44:31.551680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.240 qpair failed and we were unable to recover it. 00:34:47.240 [2024-07-14 09:44:31.551897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.240 [2024-07-14 09:44:31.551925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.240 qpair failed and we were unable to recover it. 
00:34:47.240 [2024-07-14 09:44:31.552134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.240 [2024-07-14 09:44:31.552165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.240 qpair failed and we were unable to recover it. 00:34:47.240 [2024-07-14 09:44:31.552395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.240 [2024-07-14 09:44:31.552422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.240 qpair failed and we were unable to recover it. 00:34:47.240 [2024-07-14 09:44:31.552624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.240 [2024-07-14 09:44:31.552651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.240 qpair failed and we were unable to recover it. 00:34:47.240 [2024-07-14 09:44:31.552839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.240 [2024-07-14 09:44:31.552872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.240 qpair failed and we were unable to recover it. 00:34:47.240 [2024-07-14 09:44:31.553099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.240 [2024-07-14 09:44:31.553127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.240 qpair failed and we were unable to recover it. 00:34:47.240 [2024-07-14 09:44:31.553319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.240 [2024-07-14 09:44:31.553347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.240 qpair failed and we were unable to recover it. 00:34:47.240 [2024-07-14 09:44:31.553675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.240 [2024-07-14 09:44:31.553735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.240 qpair failed and we were unable to recover it. 00:34:47.240 [2024-07-14 09:44:31.553946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.240 [2024-07-14 09:44:31.553977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.240 qpair failed and we were unable to recover it. 00:34:47.240 [2024-07-14 09:44:31.554222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.241 [2024-07-14 09:44:31.554250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.241 qpair failed and we were unable to recover it. 00:34:47.241 [2024-07-14 09:44:31.554414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.241 [2024-07-14 09:44:31.554442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.241 qpair failed and we were unable to recover it. 
00:34:47.241 [2024-07-14 09:44:31.554656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.241 [2024-07-14 09:44:31.554687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.241 qpair failed and we were unable to recover it. 00:34:47.241 [2024-07-14 09:44:31.554927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.241 [2024-07-14 09:44:31.554959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.241 qpair failed and we were unable to recover it. 00:34:47.241 [2024-07-14 09:44:31.555168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.241 [2024-07-14 09:44:31.555198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.241 qpair failed and we were unable to recover it. 00:34:47.241 [2024-07-14 09:44:31.555438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.241 [2024-07-14 09:44:31.555466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.241 qpair failed and we were unable to recover it. 00:34:47.241 [2024-07-14 09:44:31.555720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.241 [2024-07-14 09:44:31.555747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.241 qpair failed and we were unable to recover it. 00:34:47.241 [2024-07-14 09:44:31.555930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.241 [2024-07-14 09:44:31.555962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.241 qpair failed and we were unable to recover it. 00:34:47.241 [2024-07-14 09:44:31.556176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.241 [2024-07-14 09:44:31.556207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.241 qpair failed and we were unable to recover it. 00:34:47.241 [2024-07-14 09:44:31.556398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.241 [2024-07-14 09:44:31.556425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.241 qpair failed and we were unable to recover it. 00:34:47.241 [2024-07-14 09:44:31.556809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.241 [2024-07-14 09:44:31.556861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.241 qpair failed and we were unable to recover it. 00:34:47.241 [2024-07-14 09:44:31.557092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.241 [2024-07-14 09:44:31.557120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.241 qpair failed and we were unable to recover it. 
00:34:47.241 [2024-07-14 09:44:31.557285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.241 [2024-07-14 09:44:31.557314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.241 qpair failed and we were unable to recover it. 00:34:47.241 [2024-07-14 09:44:31.557531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.241 [2024-07-14 09:44:31.557558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.241 qpair failed and we were unable to recover it. 00:34:47.241 [2024-07-14 09:44:31.557770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.241 [2024-07-14 09:44:31.557800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.241 qpair failed and we were unable to recover it. 00:34:47.241 [2024-07-14 09:44:31.557998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.241 [2024-07-14 09:44:31.558030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.241 qpair failed and we were unable to recover it. 00:34:47.241 [2024-07-14 09:44:31.558231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.241 [2024-07-14 09:44:31.558262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.241 qpair failed and we were unable to recover it. 00:34:47.241 [2024-07-14 09:44:31.558476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.241 [2024-07-14 09:44:31.558504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.241 qpair failed and we were unable to recover it. 00:34:47.241 [2024-07-14 09:44:31.558671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.241 [2024-07-14 09:44:31.558699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.241 qpair failed and we were unable to recover it. 00:34:47.241 [2024-07-14 09:44:31.558913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.241 [2024-07-14 09:44:31.558944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.241 qpair failed and we were unable to recover it. 00:34:47.241 [2024-07-14 09:44:31.559185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.241 [2024-07-14 09:44:31.559213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.241 qpair failed and we were unable to recover it. 00:34:47.241 [2024-07-14 09:44:31.559434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.241 [2024-07-14 09:44:31.559462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.241 qpair failed and we were unable to recover it. 
00:34:47.241 [2024-07-14 09:44:31.559863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.241 [2024-07-14 09:44:31.559927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.241 qpair failed and we were unable to recover it. 00:34:47.241 [2024-07-14 09:44:31.560138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.241 [2024-07-14 09:44:31.560169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.241 qpair failed and we were unable to recover it. 00:34:47.241 [2024-07-14 09:44:31.560342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.241 [2024-07-14 09:44:31.560373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.241 qpair failed and we were unable to recover it. 00:34:47.241 [2024-07-14 09:44:31.560606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.241 [2024-07-14 09:44:31.560634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.241 qpair failed and we were unable to recover it. 00:34:47.241 [2024-07-14 09:44:31.560857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.241 [2024-07-14 09:44:31.560901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.241 qpair failed and we were unable to recover it. 00:34:47.241 [2024-07-14 09:44:31.561118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.241 [2024-07-14 09:44:31.561149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.241 qpair failed and we were unable to recover it. 00:34:47.241 [2024-07-14 09:44:31.561383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.241 [2024-07-14 09:44:31.561414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.241 qpair failed and we were unable to recover it. 00:34:47.241 [2024-07-14 09:44:31.561657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.241 [2024-07-14 09:44:31.561685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.241 qpair failed and we were unable to recover it. 00:34:47.241 [2024-07-14 09:44:31.561940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.241 [2024-07-14 09:44:31.561971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.241 qpair failed and we were unable to recover it. 00:34:47.241 [2024-07-14 09:44:31.562147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.241 [2024-07-14 09:44:31.562177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.241 qpair failed and we were unable to recover it. 
00:34:47.241 [2024-07-14 09:44:31.562413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.241 [2024-07-14 09:44:31.562444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.241 qpair failed and we were unable to recover it. 00:34:47.241 [2024-07-14 09:44:31.562667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.241 [2024-07-14 09:44:31.562695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.241 qpair failed and we were unable to recover it. 00:34:47.241 [2024-07-14 09:44:31.562890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.241 [2024-07-14 09:44:31.562918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.241 qpair failed and we were unable to recover it. 00:34:47.241 [2024-07-14 09:44:31.563139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.241 [2024-07-14 09:44:31.563169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.241 qpair failed and we were unable to recover it. 00:34:47.241 [2024-07-14 09:44:31.563377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.241 [2024-07-14 09:44:31.563407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.241 qpair failed and we were unable to recover it. 00:34:47.241 [2024-07-14 09:44:31.563640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.241 [2024-07-14 09:44:31.563668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.241 qpair failed and we were unable to recover it. 00:34:47.241 [2024-07-14 09:44:31.563908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.241 [2024-07-14 09:44:31.563939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.241 qpair failed and we were unable to recover it. 00:34:47.241 [2024-07-14 09:44:31.564177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.241 [2024-07-14 09:44:31.564208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.241 qpair failed and we were unable to recover it. 00:34:47.241 [2024-07-14 09:44:31.564461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.241 [2024-07-14 09:44:31.564492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.241 qpair failed and we were unable to recover it. 00:34:47.241 [2024-07-14 09:44:31.564745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.241 [2024-07-14 09:44:31.564773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.241 qpair failed and we were unable to recover it. 
00:34:47.241 [2024-07-14 09:44:31.564967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.241 [2024-07-14 09:44:31.564997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.241 qpair failed and we were unable to recover it. 00:34:47.241 [2024-07-14 09:44:31.565232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.241 [2024-07-14 09:44:31.565263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.241 qpair failed and we were unable to recover it. 00:34:47.241 [2024-07-14 09:44:31.565498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.241 [2024-07-14 09:44:31.565526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.241 qpair failed and we were unable to recover it. 00:34:47.241 [2024-07-14 09:44:31.565713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.241 [2024-07-14 09:44:31.565740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.241 qpair failed and we were unable to recover it. 00:34:47.241 [2024-07-14 09:44:31.565958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.241 [2024-07-14 09:44:31.565989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.241 qpair failed and we were unable to recover it. 00:34:47.241 [2024-07-14 09:44:31.566195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.241 [2024-07-14 09:44:31.566225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.241 qpair failed and we were unable to recover it. 00:34:47.241 [2024-07-14 09:44:31.566428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.241 [2024-07-14 09:44:31.566458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.241 qpair failed and we were unable to recover it. 00:34:47.241 [2024-07-14 09:44:31.566669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.241 [2024-07-14 09:44:31.566697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.241 qpair failed and we were unable to recover it. 00:34:47.241 [2024-07-14 09:44:31.567027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.241 [2024-07-14 09:44:31.567083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.241 qpair failed and we were unable to recover it. 00:34:47.242 [2024-07-14 09:44:31.567295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.242 [2024-07-14 09:44:31.567326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.242 qpair failed and we were unable to recover it. 
00:34:47.242 [2024-07-14 09:44:31.567511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.242 [2024-07-14 09:44:31.567542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.242 qpair failed and we were unable to recover it. 00:34:47.242 [2024-07-14 09:44:31.567752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.242 [2024-07-14 09:44:31.567780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.242 qpair failed and we were unable to recover it. 00:34:47.242 [2024-07-14 09:44:31.568035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.242 [2024-07-14 09:44:31.568066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.242 qpair failed and we were unable to recover it. 00:34:47.242 [2024-07-14 09:44:31.568272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.242 [2024-07-14 09:44:31.568303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.242 qpair failed and we were unable to recover it. 00:34:47.242 [2024-07-14 09:44:31.568513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.242 [2024-07-14 09:44:31.568542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.242 qpair failed and we were unable to recover it. 00:34:47.242 [2024-07-14 09:44:31.568763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.242 [2024-07-14 09:44:31.568791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.242 qpair failed and we were unable to recover it. 00:34:47.242 [2024-07-14 09:44:31.568998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.242 [2024-07-14 09:44:31.569029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.242 qpair failed and we were unable to recover it. 00:34:47.242 [2024-07-14 09:44:31.569212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.242 [2024-07-14 09:44:31.569244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.242 qpair failed and we were unable to recover it. 00:34:47.242 [2024-07-14 09:44:31.569489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.242 [2024-07-14 09:44:31.569517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.242 qpair failed and we were unable to recover it. 00:34:47.242 [2024-07-14 09:44:31.569739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.242 [2024-07-14 09:44:31.569767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.242 qpair failed and we were unable to recover it. 
00:34:47.242 [2024-07-14 09:44:31.570053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.242 [2024-07-14 09:44:31.570084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.242 qpair failed and we were unable to recover it. 00:34:47.242 [2024-07-14 09:44:31.570293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.242 [2024-07-14 09:44:31.570324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.242 qpair failed and we were unable to recover it. 00:34:47.242 [2024-07-14 09:44:31.570511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.242 [2024-07-14 09:44:31.570543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.242 qpair failed and we were unable to recover it. 00:34:47.242 [2024-07-14 09:44:31.570756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.242 [2024-07-14 09:44:31.570784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.242 qpair failed and we were unable to recover it. 00:34:47.242 [2024-07-14 09:44:31.571025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.242 [2024-07-14 09:44:31.571058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.242 qpair failed and we were unable to recover it. 00:34:47.242 [2024-07-14 09:44:31.571286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.242 [2024-07-14 09:44:31.571330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.242 qpair failed and we were unable to recover it. 00:34:47.242 [2024-07-14 09:44:31.571566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.242 [2024-07-14 09:44:31.571597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.242 qpair failed and we were unable to recover it. 00:34:47.242 [2024-07-14 09:44:31.571777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.242 [2024-07-14 09:44:31.571805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.242 qpair failed and we were unable to recover it. 00:34:47.242 [2024-07-14 09:44:31.572045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.242 [2024-07-14 09:44:31.572077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.242 qpair failed and we were unable to recover it. 00:34:47.242 [2024-07-14 09:44:31.572296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.242 [2024-07-14 09:44:31.572327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.242 qpair failed and we were unable to recover it. 
00:34:47.242 [2024-07-14 09:44:31.572538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.242 [2024-07-14 09:44:31.572568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.242 qpair failed and we were unable to recover it. 00:34:47.242 [2024-07-14 09:44:31.572787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.242 [2024-07-14 09:44:31.572814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.242 qpair failed and we were unable to recover it. 00:34:47.242 [2024-07-14 09:44:31.573067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.242 [2024-07-14 09:44:31.573098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.242 qpair failed and we were unable to recover it. 00:34:47.242 [2024-07-14 09:44:31.573295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.242 [2024-07-14 09:44:31.573325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.242 qpair failed and we were unable to recover it. 00:34:47.242 [2024-07-14 09:44:31.573512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.242 [2024-07-14 09:44:31.573544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.242 qpair failed and we were unable to recover it. 00:34:47.242 [2024-07-14 09:44:31.573727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.242 [2024-07-14 09:44:31.573755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.242 qpair failed and we were unable to recover it. 00:34:47.242 [2024-07-14 09:44:31.573971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.242 [2024-07-14 09:44:31.574015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.242 qpair failed and we were unable to recover it. 00:34:47.242 [2024-07-14 09:44:31.574229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.242 [2024-07-14 09:44:31.574257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.242 qpair failed and we were unable to recover it. 00:34:47.242 [2024-07-14 09:44:31.574449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.242 [2024-07-14 09:44:31.574477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.242 qpair failed and we were unable to recover it. 00:34:47.242 [2024-07-14 09:44:31.574641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.242 [2024-07-14 09:44:31.574670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.242 qpair failed and we were unable to recover it. 
00:34:47.242 [2024-07-14 09:44:31.574918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.242 [2024-07-14 09:44:31.574949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.242 qpair failed and we were unable to recover it. 00:34:47.242 [2024-07-14 09:44:31.575166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.242 [2024-07-14 09:44:31.575194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.242 qpair failed and we were unable to recover it. 00:34:47.242 [2024-07-14 09:44:31.575438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.242 [2024-07-14 09:44:31.575469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.242 qpair failed and we were unable to recover it. 00:34:47.242 [2024-07-14 09:44:31.575715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.242 [2024-07-14 09:44:31.575743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.242 qpair failed and we were unable to recover it. 00:34:47.242 [2024-07-14 09:44:31.575940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.242 [2024-07-14 09:44:31.575968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.242 qpair failed and we were unable to recover it. 00:34:47.242 [2024-07-14 09:44:31.576175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.242 [2024-07-14 09:44:31.576206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.242 qpair failed and we were unable to recover it. 00:34:47.242 [2024-07-14 09:44:31.576438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.242 [2024-07-14 09:44:31.576469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.242 qpair failed and we were unable to recover it. 00:34:47.242 [2024-07-14 09:44:31.576665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.242 [2024-07-14 09:44:31.576693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.242 qpair failed and we were unable to recover it. 00:34:47.242 [2024-07-14 09:44:31.576861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.242 [2024-07-14 09:44:31.576895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.242 qpair failed and we were unable to recover it. 00:34:47.242 [2024-07-14 09:44:31.577110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.242 [2024-07-14 09:44:31.577140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.242 qpair failed and we were unable to recover it. 
00:34:47.242 [2024-07-14 09:44:31.577349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.242 [2024-07-14 09:44:31.577380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.242 qpair failed and we were unable to recover it. 00:34:47.242 [2024-07-14 09:44:31.577629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.242 [2024-07-14 09:44:31.577657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.242 qpair failed and we were unable to recover it. 00:34:47.242 [2024-07-14 09:44:31.577914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.242 [2024-07-14 09:44:31.577945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.242 qpair failed and we were unable to recover it. 00:34:47.242 [2024-07-14 09:44:31.578164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.242 [2024-07-14 09:44:31.578194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.242 qpair failed and we were unable to recover it. 00:34:47.242 [2024-07-14 09:44:31.578426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.242 [2024-07-14 09:44:31.578457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.242 qpair failed and we were unable to recover it. 00:34:47.242 [2024-07-14 09:44:31.578670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.242 [2024-07-14 09:44:31.578697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.242 qpair failed and we were unable to recover it. 00:34:47.242 [2024-07-14 09:44:31.578921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.242 [2024-07-14 09:44:31.578952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.242 qpair failed and we were unable to recover it. 00:34:47.242 [2024-07-14 09:44:31.579141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.242 [2024-07-14 09:44:31.579171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.242 qpair failed and we were unable to recover it. 00:34:47.242 [2024-07-14 09:44:31.579406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.242 [2024-07-14 09:44:31.579436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.242 qpair failed and we were unable to recover it. 00:34:47.242 [2024-07-14 09:44:31.579617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.242 [2024-07-14 09:44:31.579644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.242 qpair failed and we were unable to recover it. 
00:34:47.242 [2024-07-14 09:44:31.579860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.242 [2024-07-14 09:44:31.579919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.242 qpair failed and we were unable to recover it. 00:34:47.242 [2024-07-14 09:44:31.580106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.242 [2024-07-14 09:44:31.580138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.242 qpair failed and we were unable to recover it. 00:34:47.243 [2024-07-14 09:44:31.580380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.243 [2024-07-14 09:44:31.580408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.243 qpair failed and we were unable to recover it. 00:34:47.243 [2024-07-14 09:44:31.580625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.243 [2024-07-14 09:44:31.580653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.243 qpair failed and we were unable to recover it. 00:34:47.243 [2024-07-14 09:44:31.580878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.243 [2024-07-14 09:44:31.580914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.243 qpair failed and we were unable to recover it. 00:34:47.243 [2024-07-14 09:44:31.581123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.243 [2024-07-14 09:44:31.581154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.243 qpair failed and we were unable to recover it. 00:34:47.243 [2024-07-14 09:44:31.581327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.243 [2024-07-14 09:44:31.581358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.243 qpair failed and we were unable to recover it. 00:34:47.243 [2024-07-14 09:44:31.581569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.243 [2024-07-14 09:44:31.581596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.243 qpair failed and we were unable to recover it. 00:34:47.243 [2024-07-14 09:44:31.581808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.243 [2024-07-14 09:44:31.581839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.243 qpair failed and we were unable to recover it. 00:34:47.243 [2024-07-14 09:44:31.582062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.243 [2024-07-14 09:44:31.582092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.243 qpair failed and we were unable to recover it. 
00:34:47.243 [2024-07-14 09:44:31.582302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.243 [2024-07-14 09:44:31.582333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.243 qpair failed and we were unable to recover it. 00:34:47.243 [2024-07-14 09:44:31.582511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.243 [2024-07-14 09:44:31.582539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.243 qpair failed and we were unable to recover it. 00:34:47.243 [2024-07-14 09:44:31.582861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.243 [2024-07-14 09:44:31.582919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.243 qpair failed and we were unable to recover it. 00:34:47.243 [2024-07-14 09:44:31.583144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.243 [2024-07-14 09:44:31.583171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.243 qpair failed and we were unable to recover it. 00:34:47.243 [2024-07-14 09:44:31.583327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.243 [2024-07-14 09:44:31.583355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.243 qpair failed and we were unable to recover it. 00:34:47.243 [2024-07-14 09:44:31.583570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.243 [2024-07-14 09:44:31.583597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.243 qpair failed and we were unable to recover it. 00:34:47.243 [2024-07-14 09:44:31.583854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.243 [2024-07-14 09:44:31.583897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.243 qpair failed and we were unable to recover it. 00:34:47.243 [2024-07-14 09:44:31.584091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.243 [2024-07-14 09:44:31.584118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.243 qpair failed and we were unable to recover it. 00:34:47.243 [2024-07-14 09:44:31.584342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.243 [2024-07-14 09:44:31.584386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.243 qpair failed and we were unable to recover it. 00:34:47.243 [2024-07-14 09:44:31.584600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.243 [2024-07-14 09:44:31.584628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.243 qpair failed and we were unable to recover it. 
00:34:47.243 [2024-07-14 09:44:31.584843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.243 [2024-07-14 09:44:31.584880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.243 qpair failed and we were unable to recover it. 00:34:47.243 [2024-07-14 09:44:31.585073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.243 [2024-07-14 09:44:31.585104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.243 qpair failed and we were unable to recover it. 00:34:47.243 [2024-07-14 09:44:31.585313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.243 [2024-07-14 09:44:31.585341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.243 qpair failed and we were unable to recover it. 00:34:47.243 [2024-07-14 09:44:31.585558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.243 [2024-07-14 09:44:31.585586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.243 qpair failed and we were unable to recover it. 00:34:47.243 [2024-07-14 09:44:31.585809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.243 [2024-07-14 09:44:31.585839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.243 qpair failed and we were unable to recover it. 00:34:47.243 [2024-07-14 09:44:31.586082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.243 [2024-07-14 09:44:31.586113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.243 qpair failed and we were unable to recover it. 00:34:47.243 [2024-07-14 09:44:31.586331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.243 [2024-07-14 09:44:31.586359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.243 qpair failed and we were unable to recover it. 00:34:47.243 [2024-07-14 09:44:31.586552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.243 [2024-07-14 09:44:31.586580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.243 qpair failed and we were unable to recover it. 00:34:47.243 [2024-07-14 09:44:31.586821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.243 [2024-07-14 09:44:31.586852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.243 qpair failed and we were unable to recover it. 00:34:47.243 [2024-07-14 09:44:31.587069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.243 [2024-07-14 09:44:31.587099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.243 qpair failed and we were unable to recover it. 
00:34:47.243 [2024-07-14 09:44:31.587338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.243 [2024-07-14 09:44:31.587366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.243 qpair failed and we were unable to recover it. 00:34:47.243 [2024-07-14 09:44:31.587564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.243 [2024-07-14 09:44:31.587592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.243 qpair failed and we were unable to recover it. 00:34:47.243 [2024-07-14 09:44:31.587838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.243 [2024-07-14 09:44:31.587875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.243 qpair failed and we were unable to recover it. 00:34:47.243 [2024-07-14 09:44:31.588091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.243 [2024-07-14 09:44:31.588121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.243 qpair failed and we were unable to recover it. 00:34:47.243 [2024-07-14 09:44:31.588357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.243 [2024-07-14 09:44:31.588388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.243 qpair failed and we were unable to recover it. 00:34:47.243 [2024-07-14 09:44:31.588632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.243 [2024-07-14 09:44:31.588659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.243 qpair failed and we were unable to recover it. 00:34:47.243 [2024-07-14 09:44:31.588879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.243 [2024-07-14 09:44:31.588907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.243 qpair failed and we were unable to recover it. 00:34:47.243 [2024-07-14 09:44:31.589121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.243 [2024-07-14 09:44:31.589152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.243 qpair failed and we were unable to recover it. 00:34:47.243 [2024-07-14 09:44:31.589328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.243 [2024-07-14 09:44:31.589360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.243 qpair failed and we were unable to recover it. 00:34:47.243 [2024-07-14 09:44:31.589604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.243 [2024-07-14 09:44:31.589632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.243 qpair failed and we were unable to recover it. 
00:34:47.243 [2024-07-14 09:44:31.589857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.243 [2024-07-14 09:44:31.589899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.243 qpair failed and we were unable to recover it. 00:34:47.243 [2024-07-14 09:44:31.590112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.243 [2024-07-14 09:44:31.590144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.243 qpair failed and we were unable to recover it. 00:34:47.243 [2024-07-14 09:44:31.590361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.243 [2024-07-14 09:44:31.590389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.243 qpair failed and we were unable to recover it. 00:34:47.243 [2024-07-14 09:44:31.590582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.243 [2024-07-14 09:44:31.590610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.243 qpair failed and we were unable to recover it. 00:34:47.243 [2024-07-14 09:44:31.590801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.243 [2024-07-14 09:44:31.590836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.243 qpair failed and we were unable to recover it. 00:34:47.243 [2024-07-14 09:44:31.591055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.243 [2024-07-14 09:44:31.591086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.243 qpair failed and we were unable to recover it. 00:34:47.243 [2024-07-14 09:44:31.591329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.243 [2024-07-14 09:44:31.591356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.243 qpair failed and we were unable to recover it. 00:34:47.243 [2024-07-14 09:44:31.591524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.243 [2024-07-14 09:44:31.591551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.243 qpair failed and we were unable to recover it. 00:34:47.243 [2024-07-14 09:44:31.591731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.243 [2024-07-14 09:44:31.591761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.243 qpair failed and we were unable to recover it. 00:34:47.243 [2024-07-14 09:44:31.591970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.243 [2024-07-14 09:44:31.592001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.243 qpair failed and we were unable to recover it. 
00:34:47.243 [2024-07-14 09:44:31.592221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.243 [2024-07-14 09:44:31.592248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.243 qpair failed and we were unable to recover it. 00:34:47.243 [2024-07-14 09:44:31.592437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.243 [2024-07-14 09:44:31.592464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.243 qpair failed and we were unable to recover it. 00:34:47.243 [2024-07-14 09:44:31.592883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.243 [2024-07-14 09:44:31.592933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.243 qpair failed and we were unable to recover it. 00:34:47.243 [2024-07-14 09:44:31.593169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.244 [2024-07-14 09:44:31.593199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.244 qpair failed and we were unable to recover it. 00:34:47.244 [2024-07-14 09:44:31.593416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.244 [2024-07-14 09:44:31.593443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.244 qpair failed and we were unable to recover it. 00:34:47.244 [2024-07-14 09:44:31.593631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.244 [2024-07-14 09:44:31.593659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.244 qpair failed and we were unable to recover it. 00:34:47.244 [2024-07-14 09:44:31.593878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.244 [2024-07-14 09:44:31.593909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.244 qpair failed and we were unable to recover it. 00:34:47.244 [2024-07-14 09:44:31.594154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.244 [2024-07-14 09:44:31.594182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.244 qpair failed and we were unable to recover it. 00:34:47.244 [2024-07-14 09:44:31.594375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.244 [2024-07-14 09:44:31.594402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.244 qpair failed and we were unable to recover it. 00:34:47.244 [2024-07-14 09:44:31.594592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.244 [2024-07-14 09:44:31.594620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.244 qpair failed and we were unable to recover it. 
00:34:47.244 [2024-07-14 09:44:31.594838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.244 [2024-07-14 09:44:31.594875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.244 qpair failed and we were unable to recover it. 00:34:47.244 [2024-07-14 09:44:31.595088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.244 [2024-07-14 09:44:31.595120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.244 qpair failed and we were unable to recover it. 00:34:47.244 [2024-07-14 09:44:31.595299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.244 [2024-07-14 09:44:31.595329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.244 qpair failed and we were unable to recover it. 00:34:47.244 [2024-07-14 09:44:31.595566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.244 [2024-07-14 09:44:31.595594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.244 qpair failed and we were unable to recover it. 00:34:47.244 [2024-07-14 09:44:31.595809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.244 [2024-07-14 09:44:31.595840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.244 qpair failed and we were unable to recover it. 00:34:47.244 [2024-07-14 09:44:31.596096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.244 [2024-07-14 09:44:31.596124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.244 qpair failed and we were unable to recover it. 00:34:47.244 [2024-07-14 09:44:31.596344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.244 [2024-07-14 09:44:31.596388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.244 qpair failed and we were unable to recover it. 00:34:47.244 [2024-07-14 09:44:31.596600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.244 [2024-07-14 09:44:31.596627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.244 qpair failed and we were unable to recover it. 00:34:47.244 [2024-07-14 09:44:31.596814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.244 [2024-07-14 09:44:31.596841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.244 qpair failed and we were unable to recover it. 00:34:47.244 [2024-07-14 09:44:31.597031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.244 [2024-07-14 09:44:31.597060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.244 qpair failed and we were unable to recover it. 
00:34:47.244 [2024-07-14 09:44:31.597241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.244 [2024-07-14 09:44:31.597271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.244 qpair failed and we were unable to recover it. 00:34:47.244 [2024-07-14 09:44:31.597491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.244 [2024-07-14 09:44:31.597534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.244 qpair failed and we were unable to recover it. 00:34:47.244 [2024-07-14 09:44:31.597773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.244 [2024-07-14 09:44:31.597804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.244 qpair failed and we were unable to recover it. 00:34:47.244 [2024-07-14 09:44:31.597998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.244 [2024-07-14 09:44:31.598031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.244 qpair failed and we were unable to recover it. 00:34:47.244 [2024-07-14 09:44:31.598268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.244 [2024-07-14 09:44:31.598299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.244 qpair failed and we were unable to recover it. 00:34:47.244 [2024-07-14 09:44:31.598512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.244 [2024-07-14 09:44:31.598554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.244 qpair failed and we were unable to recover it. 00:34:47.244 [2024-07-14 09:44:31.598748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.244 [2024-07-14 09:44:31.598775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.244 qpair failed and we were unable to recover it. 00:34:47.244 [2024-07-14 09:44:31.598974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.244 [2024-07-14 09:44:31.599005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.244 qpair failed and we were unable to recover it. 00:34:47.244 [2024-07-14 09:44:31.599186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.244 [2024-07-14 09:44:31.599216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.244 qpair failed and we were unable to recover it. 00:34:47.244 [2024-07-14 09:44:31.599441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.244 [2024-07-14 09:44:31.599469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.244 qpair failed and we were unable to recover it. 
00:34:47.244 [2024-07-14 09:44:31.599755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.244 [2024-07-14 09:44:31.599811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.244 qpair failed and we were unable to recover it. 00:34:47.244 [2024-07-14 09:44:31.600045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.244 [2024-07-14 09:44:31.600076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.244 qpair failed and we were unable to recover it. 00:34:47.244 [2024-07-14 09:44:31.600258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.244 [2024-07-14 09:44:31.600292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.244 qpair failed and we were unable to recover it. 00:34:47.244 [2024-07-14 09:44:31.600523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.244 [2024-07-14 09:44:31.600549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.244 qpair failed and we were unable to recover it. 00:34:47.244 [2024-07-14 09:44:31.600762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.244 [2024-07-14 09:44:31.600793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.244 qpair failed and we were unable to recover it. 00:34:47.244 [2024-07-14 09:44:31.600994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.244 [2024-07-14 09:44:31.601023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.244 qpair failed and we were unable to recover it. 00:34:47.244 [2024-07-14 09:44:31.601238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.244 [2024-07-14 09:44:31.601270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.244 qpair failed and we were unable to recover it. 00:34:47.244 [2024-07-14 09:44:31.601535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.244 [2024-07-14 09:44:31.601563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.244 qpair failed and we were unable to recover it. 00:34:47.244 [2024-07-14 09:44:31.601802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.244 [2024-07-14 09:44:31.601833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.244 qpair failed and we were unable to recover it. 00:34:47.244 [2024-07-14 09:44:31.602085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.244 [2024-07-14 09:44:31.602116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.244 qpair failed and we were unable to recover it. 
00:34:47.244 [2024-07-14 09:44:31.602325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.244 [2024-07-14 09:44:31.602355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.244 qpair failed and we were unable to recover it. 00:34:47.244 [2024-07-14 09:44:31.602536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.244 [2024-07-14 09:44:31.602563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.244 qpair failed and we were unable to recover it. 00:34:47.244 [2024-07-14 09:44:31.602734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.244 [2024-07-14 09:44:31.602761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.244 qpair failed and we were unable to recover it. 00:34:47.244 [2024-07-14 09:44:31.603066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.244 [2024-07-14 09:44:31.603096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.244 qpair failed and we were unable to recover it. 00:34:47.244 [2024-07-14 09:44:31.603308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.244 [2024-07-14 09:44:31.603338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.244 qpair failed and we were unable to recover it. 00:34:47.244 [2024-07-14 09:44:31.603537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.244 [2024-07-14 09:44:31.603579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.244 qpair failed and we were unable to recover it. 00:34:47.244 [2024-07-14 09:44:31.603754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.244 [2024-07-14 09:44:31.603781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.244 qpair failed and we were unable to recover it. 00:34:47.244 [2024-07-14 09:44:31.603966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.244 [2024-07-14 09:44:31.604006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.244 qpair failed and we were unable to recover it. 00:34:47.245 [2024-07-14 09:44:31.604204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.245 [2024-07-14 09:44:31.604235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.245 qpair failed and we were unable to recover it. 00:34:47.245 [2024-07-14 09:44:31.604450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.245 [2024-07-14 09:44:31.604494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.245 qpair failed and we were unable to recover it. 
00:34:47.245 [2024-07-14 09:44:31.604738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.245 [2024-07-14 09:44:31.604769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.245 qpair failed and we were unable to recover it. 00:34:47.245 [2024-07-14 09:44:31.604988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.245 [2024-07-14 09:44:31.605016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.245 qpair failed and we were unable to recover it. 00:34:47.245 [2024-07-14 09:44:31.605253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.245 [2024-07-14 09:44:31.605283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.245 qpair failed and we were unable to recover it. 00:34:47.245 [2024-07-14 09:44:31.605512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.245 [2024-07-14 09:44:31.605538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.245 qpair failed and we were unable to recover it. 00:34:47.245 [2024-07-14 09:44:31.605721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.245 [2024-07-14 09:44:31.605749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.245 qpair failed and we were unable to recover it. 00:34:47.245 [2024-07-14 09:44:31.605966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.245 [2024-07-14 09:44:31.605997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.245 qpair failed and we were unable to recover it. 00:34:47.245 [2024-07-14 09:44:31.606228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.245 [2024-07-14 09:44:31.606258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.245 qpair failed and we were unable to recover it. 00:34:47.245 [2024-07-14 09:44:31.606474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.245 [2024-07-14 09:44:31.606501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.245 qpair failed and we were unable to recover it. 00:34:47.245 [2024-07-14 09:44:31.606724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.245 [2024-07-14 09:44:31.606754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.245 qpair failed and we were unable to recover it. 00:34:47.245 [2024-07-14 09:44:31.606973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.245 [2024-07-14 09:44:31.607002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.245 qpair failed and we were unable to recover it. 
00:34:47.245 [2024-07-14 09:44:31.607241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.245 [2024-07-14 09:44:31.607272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.245 qpair failed and we were unable to recover it. 00:34:47.245 [2024-07-14 09:44:31.607491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.245 [2024-07-14 09:44:31.607518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.245 qpair failed and we were unable to recover it. 00:34:47.245 [2024-07-14 09:44:31.607864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.245 [2024-07-14 09:44:31.607925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.245 qpair failed and we were unable to recover it. 00:34:47.245 [2024-07-14 09:44:31.608136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.245 [2024-07-14 09:44:31.608166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.245 qpair failed and we were unable to recover it. 00:34:47.245 [2024-07-14 09:44:31.608376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.245 [2024-07-14 09:44:31.608407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.245 qpair failed and we were unable to recover it. 00:34:47.245 [2024-07-14 09:44:31.608654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.245 [2024-07-14 09:44:31.608682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.245 qpair failed and we were unable to recover it. 00:34:47.245 [2024-07-14 09:44:31.608878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.245 [2024-07-14 09:44:31.608909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.245 qpair failed and we were unable to recover it. 00:34:47.245 [2024-07-14 09:44:31.609151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.245 [2024-07-14 09:44:31.609178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.245 qpair failed and we were unable to recover it. 00:34:47.245 [2024-07-14 09:44:31.609416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.245 [2024-07-14 09:44:31.609446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.245 qpair failed and we were unable to recover it. 00:34:47.245 [2024-07-14 09:44:31.609687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.245 [2024-07-14 09:44:31.609714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.245 qpair failed and we were unable to recover it. 
00:34:47.245 [2024-07-14 09:44:31.609918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.245 [2024-07-14 09:44:31.609947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.245 qpair failed and we were unable to recover it. 00:34:47.245 [2024-07-14 09:44:31.610191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.245 [2024-07-14 09:44:31.610222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.245 qpair failed and we were unable to recover it. 00:34:47.245 [2024-07-14 09:44:31.610442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.245 [2024-07-14 09:44:31.610469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.245 qpair failed and we were unable to recover it. 00:34:47.245 [2024-07-14 09:44:31.610685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.245 [2024-07-14 09:44:31.610712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.245 qpair failed and we were unable to recover it. 00:34:47.245 [2024-07-14 09:44:31.610910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.245 [2024-07-14 09:44:31.610946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.245 qpair failed and we were unable to recover it. 00:34:47.245 [2024-07-14 09:44:31.611161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.245 [2024-07-14 09:44:31.611191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.245 qpair failed and we were unable to recover it. 00:34:47.245 [2024-07-14 09:44:31.611375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.245 [2024-07-14 09:44:31.611405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.245 qpair failed and we were unable to recover it. 00:34:47.245 [2024-07-14 09:44:31.611636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.245 [2024-07-14 09:44:31.611663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.245 qpair failed and we were unable to recover it. 00:34:47.245 [2024-07-14 09:44:31.611919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.245 [2024-07-14 09:44:31.611949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.245 qpair failed and we were unable to recover it. 00:34:47.245 [2024-07-14 09:44:31.612133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.245 [2024-07-14 09:44:31.612163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.245 qpair failed and we were unable to recover it. 
00:34:47.245 [2024-07-14 09:44:31.612375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.245 [2024-07-14 09:44:31.612405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.245 qpair failed and we were unable to recover it. 00:34:47.245 [2024-07-14 09:44:31.612608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.245 [2024-07-14 09:44:31.612636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.245 qpair failed and we were unable to recover it. 00:34:47.245 [2024-07-14 09:44:31.612839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.245 [2024-07-14 09:44:31.612876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.245 qpair failed and we were unable to recover it. 00:34:47.245 [2024-07-14 09:44:31.613115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.245 [2024-07-14 09:44:31.613145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.245 qpair failed and we were unable to recover it. 00:34:47.245 [2024-07-14 09:44:31.613348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.245 [2024-07-14 09:44:31.613376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.245 qpair failed and we were unable to recover it. 00:34:47.245 [2024-07-14 09:44:31.613565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.245 [2024-07-14 09:44:31.613593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.245 qpair failed and we were unable to recover it. 00:34:47.245 [2024-07-14 09:44:31.613788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.245 [2024-07-14 09:44:31.613819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.245 qpair failed and we were unable to recover it. 00:34:47.245 [2024-07-14 09:44:31.614049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.245 [2024-07-14 09:44:31.614078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.245 qpair failed and we were unable to recover it. 00:34:47.245 [2024-07-14 09:44:31.614343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.245 [2024-07-14 09:44:31.614374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.245 qpair failed and we were unable to recover it. 00:34:47.245 [2024-07-14 09:44:31.614556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.245 [2024-07-14 09:44:31.614584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.245 qpair failed and we were unable to recover it. 
00:34:47.245 [2024-07-14 09:44:31.614758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.245 [2024-07-14 09:44:31.614786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.245 qpair failed and we were unable to recover it. 00:34:47.245 [2024-07-14 09:44:31.614956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.245 [2024-07-14 09:44:31.614984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.245 qpair failed and we were unable to recover it. 00:34:47.245 [2024-07-14 09:44:31.615236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.245 [2024-07-14 09:44:31.615264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.245 qpair failed and we were unable to recover it. 00:34:47.245 [2024-07-14 09:44:31.615459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.245 [2024-07-14 09:44:31.615487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.245 qpair failed and we were unable to recover it. 00:34:47.245 [2024-07-14 09:44:31.615674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.245 [2024-07-14 09:44:31.615702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.245 qpair failed and we were unable to recover it. 00:34:47.245 [2024-07-14 09:44:31.615901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.245 [2024-07-14 09:44:31.615929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.245 qpair failed and we were unable to recover it. 00:34:47.245 [2024-07-14 09:44:31.616143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.245 [2024-07-14 09:44:31.616187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.245 qpair failed and we were unable to recover it. 00:34:47.245 [2024-07-14 09:44:31.616364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.245 [2024-07-14 09:44:31.616391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.245 qpair failed and we were unable to recover it. 00:34:47.245 [2024-07-14 09:44:31.616580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.245 [2024-07-14 09:44:31.616611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.245 qpair failed and we were unable to recover it. 00:34:47.246 [2024-07-14 09:44:31.616824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.246 [2024-07-14 09:44:31.616855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.246 qpair failed and we were unable to recover it. 
00:34:47.246 [2024-07-14 09:44:31.617049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.246 [2024-07-14 09:44:31.617081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.246 qpair failed and we were unable to recover it. 00:34:47.246 [2024-07-14 09:44:31.617306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.246 [2024-07-14 09:44:31.617334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.246 qpair failed and we were unable to recover it. 00:34:47.246 [2024-07-14 09:44:31.617552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.246 [2024-07-14 09:44:31.617583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.246 qpair failed and we were unable to recover it. 00:34:47.246 [2024-07-14 09:44:31.617800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.246 [2024-07-14 09:44:31.617829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.246 qpair failed and we were unable to recover it. 00:34:47.246 [2024-07-14 09:44:31.618047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.246 [2024-07-14 09:44:31.618074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.246 qpair failed and we were unable to recover it. 00:34:47.246 [2024-07-14 09:44:31.618290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.246 [2024-07-14 09:44:31.618318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.246 qpair failed and we were unable to recover it. 00:34:47.246 [2024-07-14 09:44:31.618607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.246 [2024-07-14 09:44:31.618660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.246 qpair failed and we were unable to recover it. 00:34:47.246 [2024-07-14 09:44:31.618876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.246 [2024-07-14 09:44:31.618908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.246 qpair failed and we were unable to recover it. 00:34:47.246 [2024-07-14 09:44:31.619116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.246 [2024-07-14 09:44:31.619147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.246 qpair failed and we were unable to recover it. 00:34:47.246 [2024-07-14 09:44:31.619369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.246 [2024-07-14 09:44:31.619395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.246 qpair failed and we were unable to recover it. 
00:34:47.246 [2024-07-14 09:44:31.619788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.246 [2024-07-14 09:44:31.619845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.246 qpair failed and we were unable to recover it. 00:34:47.246 [2024-07-14 09:44:31.620075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.246 [2024-07-14 09:44:31.620106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.246 qpair failed and we were unable to recover it. 00:34:47.246 [2024-07-14 09:44:31.620291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.246 [2024-07-14 09:44:31.620322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.246 qpair failed and we were unable to recover it. 00:34:47.246 [2024-07-14 09:44:31.620533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.246 [2024-07-14 09:44:31.620561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.246 qpair failed and we were unable to recover it. 00:34:47.246 [2024-07-14 09:44:31.620755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.246 [2024-07-14 09:44:31.620791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.246 qpair failed and we were unable to recover it. 00:34:47.246 [2024-07-14 09:44:31.621001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.246 [2024-07-14 09:44:31.621032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.246 qpair failed and we were unable to recover it. 00:34:47.246 [2024-07-14 09:44:31.621269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.246 [2024-07-14 09:44:31.621299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.246 qpair failed and we were unable to recover it. 00:34:47.246 [2024-07-14 09:44:31.621480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.246 [2024-07-14 09:44:31.621508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.246 qpair failed and we were unable to recover it. 00:34:47.246 [2024-07-14 09:44:31.621687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.246 [2024-07-14 09:44:31.621714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.246 qpair failed and we were unable to recover it. 00:34:47.246 [2024-07-14 09:44:31.621907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.246 [2024-07-14 09:44:31.621952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.246 qpair failed and we were unable to recover it. 
00:34:47.246 [2024-07-14 09:44:31.622169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.246 [2024-07-14 09:44:31.622200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:47.246 qpair failed and we were unable to recover it. 
[The identical three-message sequence -- posix.c:1038:posix_sock_create: connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." -- repeats for every connect attempt from 2024-07-14 09:44:31.622393 through 09:44:31.645214 (wallclock 00:34:47.246-00:34:47.248).]
00:34:47.248 [2024-07-14 09:44:31.645407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.248 [2024-07-14 09:44:31.645449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.248 qpair failed and we were unable to recover it. 
[The same sequence repeats for tqpair=0x7f166c000b90 from 2024-07-14 09:44:31.645687 through 09:44:31.661973 (wallclock 00:34:47.248-00:34:47.249).]
00:34:47.249 [2024-07-14 09:44:31.662147] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10045b0 is same with the state(5) to be set 
00:34:47.249 [2024-07-14 09:44:31.662439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.249 [2024-07-14 09:44:31.662498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:47.249 qpair failed and we were unable to recover it. 
[The same sequence repeats for tqpair=0x7f1660000b90 from 2024-07-14 09:44:31.662736 through 09:44:31.674248 (wallclock 00:34:47.249-00:34:47.528); every connect() to 10.0.0.2 port 4420 is refused and no qpair can be recovered.]
00:34:47.528 [2024-07-14 09:44:31.674437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.528 [2024-07-14 09:44:31.674466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:47.528 qpair failed and we were unable to recover it. 00:34:47.528 [2024-07-14 09:44:31.674686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.528 [2024-07-14 09:44:31.674717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:47.528 qpair failed and we were unable to recover it. 00:34:47.528 [2024-07-14 09:44:31.674938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.528 [2024-07-14 09:44:31.674969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:47.528 qpair failed and we were unable to recover it. 00:34:47.528 [2024-07-14 09:44:31.675152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.528 [2024-07-14 09:44:31.675181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:47.528 qpair failed and we were unable to recover it. 00:34:47.528 [2024-07-14 09:44:31.675366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.528 [2024-07-14 09:44:31.675398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:47.528 qpair failed and we were unable to recover it. 00:34:47.528 [2024-07-14 09:44:31.675612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.528 [2024-07-14 09:44:31.675643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:47.528 qpair failed and we were unable to recover it. 00:34:47.528 [2024-07-14 09:44:31.675857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.528 [2024-07-14 09:44:31.675901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:47.528 qpair failed and we were unable to recover it. 00:34:47.528 [2024-07-14 09:44:31.676093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.528 [2024-07-14 09:44:31.676122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:47.528 qpair failed and we were unable to recover it. 00:34:47.528 [2024-07-14 09:44:31.676287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.528 [2024-07-14 09:44:31.676315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:47.528 qpair failed and we were unable to recover it. 00:34:47.528 [2024-07-14 09:44:31.676529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.528 [2024-07-14 09:44:31.676557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:47.528 qpair failed and we were unable to recover it. 
00:34:47.528 [2024-07-14 09:44:31.676764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.528 [2024-07-14 09:44:31.676795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:47.529 qpair failed and we were unable to recover it. 00:34:47.529 [2024-07-14 09:44:31.676990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.529 [2024-07-14 09:44:31.677018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:47.529 qpair failed and we were unable to recover it. 00:34:47.529 [2024-07-14 09:44:31.677221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.529 [2024-07-14 09:44:31.677250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:47.529 qpair failed and we were unable to recover it. 00:34:47.529 [2024-07-14 09:44:31.677444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.529 [2024-07-14 09:44:31.677473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:47.529 qpair failed and we were unable to recover it. 00:34:47.529 [2024-07-14 09:44:31.677662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.529 [2024-07-14 09:44:31.677690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:47.529 qpair failed and we were unable to recover it. 00:34:47.529 [2024-07-14 09:44:31.677909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.529 [2024-07-14 09:44:31.677937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:47.529 qpair failed and we were unable to recover it. 00:34:47.529 [2024-07-14 09:44:31.678124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.529 [2024-07-14 09:44:31.678153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:47.529 qpair failed and we were unable to recover it. 00:34:47.529 [2024-07-14 09:44:31.678319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.529 [2024-07-14 09:44:31.678348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:47.529 qpair failed and we were unable to recover it. 00:34:47.529 [2024-07-14 09:44:31.678506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.529 [2024-07-14 09:44:31.678535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:47.529 qpair failed and we were unable to recover it. 00:34:47.529 [2024-07-14 09:44:31.678734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.529 [2024-07-14 09:44:31.678761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:47.529 qpair failed and we were unable to recover it. 
00:34:47.529 [2024-07-14 09:44:31.678950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.529 [2024-07-14 09:44:31.678980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:47.529 qpair failed and we were unable to recover it. 00:34:47.529 [2024-07-14 09:44:31.679170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.529 [2024-07-14 09:44:31.679199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:47.529 qpair failed and we were unable to recover it. 00:34:47.529 [2024-07-14 09:44:31.679362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.529 [2024-07-14 09:44:31.679391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:47.529 qpair failed and we were unable to recover it. 00:34:47.529 [2024-07-14 09:44:31.679558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.529 [2024-07-14 09:44:31.679586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:47.529 qpair failed and we were unable to recover it. 00:34:47.529 [2024-07-14 09:44:31.679777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.529 [2024-07-14 09:44:31.679805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:47.529 qpair failed and we were unable to recover it. 00:34:47.529 [2024-07-14 09:44:31.680024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.529 [2024-07-14 09:44:31.680067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.529 qpair failed and we were unable to recover it. 00:34:47.529 [2024-07-14 09:44:31.680235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.529 [2024-07-14 09:44:31.680265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.529 qpair failed and we were unable to recover it. 00:34:47.529 [2024-07-14 09:44:31.680470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.529 [2024-07-14 09:44:31.680516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.529 qpair failed and we were unable to recover it. 00:34:47.529 [2024-07-14 09:44:31.680727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.529 [2024-07-14 09:44:31.680772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.529 qpair failed and we were unable to recover it. 00:34:47.529 [2024-07-14 09:44:31.680994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.529 [2024-07-14 09:44:31.681022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.529 qpair failed and we were unable to recover it. 
00:34:47.529 [2024-07-14 09:44:31.681221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.529 [2024-07-14 09:44:31.681250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.529 qpair failed and we were unable to recover it. 00:34:47.529 [2024-07-14 09:44:31.681472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.529 [2024-07-14 09:44:31.681520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.529 qpair failed and we were unable to recover it. 00:34:47.529 [2024-07-14 09:44:31.681751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.529 [2024-07-14 09:44:31.681796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.529 qpair failed and we were unable to recover it. 00:34:47.529 [2024-07-14 09:44:31.682002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.529 [2024-07-14 09:44:31.682031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.529 qpair failed and we were unable to recover it. 00:34:47.529 [2024-07-14 09:44:31.682249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.529 [2024-07-14 09:44:31.682295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.529 qpair failed and we were unable to recover it. 00:34:47.529 [2024-07-14 09:44:31.682542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.529 [2024-07-14 09:44:31.682590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.529 qpair failed and we were unable to recover it. 00:34:47.529 [2024-07-14 09:44:31.682754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.529 [2024-07-14 09:44:31.682783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.529 qpair failed and we were unable to recover it. 00:34:47.529 [2024-07-14 09:44:31.682965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.529 [2024-07-14 09:44:31.683000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.529 qpair failed and we were unable to recover it. 00:34:47.529 [2024-07-14 09:44:31.683211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.529 [2024-07-14 09:44:31.683270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.529 qpair failed and we were unable to recover it. 00:34:47.529 [2024-07-14 09:44:31.683506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.529 [2024-07-14 09:44:31.683553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.529 qpair failed and we were unable to recover it. 
00:34:47.529 [2024-07-14 09:44:31.683749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.529 [2024-07-14 09:44:31.683778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.529 qpair failed and we were unable to recover it. 00:34:47.529 [2024-07-14 09:44:31.683974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.529 [2024-07-14 09:44:31.684003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.529 qpair failed and we were unable to recover it. 00:34:47.529 [2024-07-14 09:44:31.684227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.529 [2024-07-14 09:44:31.684273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.529 qpair failed and we were unable to recover it. 00:34:47.529 [2024-07-14 09:44:31.684531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.529 [2024-07-14 09:44:31.684588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.529 qpair failed and we were unable to recover it. 00:34:47.529 [2024-07-14 09:44:31.684792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.529 [2024-07-14 09:44:31.684826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.529 qpair failed and we were unable to recover it. 00:34:47.529 [2024-07-14 09:44:31.685025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.529 [2024-07-14 09:44:31.685056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.529 qpair failed and we were unable to recover it. 00:34:47.529 [2024-07-14 09:44:31.685285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.529 [2024-07-14 09:44:31.685341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.529 qpair failed and we were unable to recover it. 00:34:47.529 [2024-07-14 09:44:31.685591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.529 [2024-07-14 09:44:31.685636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.529 qpair failed and we were unable to recover it. 00:34:47.529 [2024-07-14 09:44:31.685844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.529 [2024-07-14 09:44:31.685882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.529 qpair failed and we were unable to recover it. 00:34:47.529 [2024-07-14 09:44:31.686120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.529 [2024-07-14 09:44:31.686148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.529 qpair failed and we were unable to recover it. 
00:34:47.529 [2024-07-14 09:44:31.686359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.529 [2024-07-14 09:44:31.686405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.529 qpair failed and we were unable to recover it. 00:34:47.529 [2024-07-14 09:44:31.686602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.530 [2024-07-14 09:44:31.686649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.530 qpair failed and we were unable to recover it. 00:34:47.530 [2024-07-14 09:44:31.686850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.530 [2024-07-14 09:44:31.686898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.530 qpair failed and we were unable to recover it. 00:34:47.530 [2024-07-14 09:44:31.687126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.530 [2024-07-14 09:44:31.687156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.530 qpair failed and we were unable to recover it. 00:34:47.530 [2024-07-14 09:44:31.687409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.530 [2024-07-14 09:44:31.687455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.530 qpair failed and we were unable to recover it. 00:34:47.530 [2024-07-14 09:44:31.687728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.530 [2024-07-14 09:44:31.687779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.530 qpair failed and we were unable to recover it. 00:34:47.530 [2024-07-14 09:44:31.687973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.530 [2024-07-14 09:44:31.688009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.530 qpair failed and we were unable to recover it. 00:34:47.530 [2024-07-14 09:44:31.688222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.530 [2024-07-14 09:44:31.688267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.530 qpair failed and we were unable to recover it. 00:34:47.530 [2024-07-14 09:44:31.688546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.530 [2024-07-14 09:44:31.688605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.530 qpair failed and we were unable to recover it. 00:34:47.530 [2024-07-14 09:44:31.688830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.530 [2024-07-14 09:44:31.688858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.530 qpair failed and we were unable to recover it. 
00:34:47.530 [2024-07-14 09:44:31.689061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.530 [2024-07-14 09:44:31.689091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.530 qpair failed and we were unable to recover it. 00:34:47.530 [2024-07-14 09:44:31.689340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.530 [2024-07-14 09:44:31.689386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.530 qpair failed and we were unable to recover it. 00:34:47.530 [2024-07-14 09:44:31.689640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.530 [2024-07-14 09:44:31.689689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.530 qpair failed and we were unable to recover it. 00:34:47.530 [2024-07-14 09:44:31.689916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.530 [2024-07-14 09:44:31.689945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.530 qpair failed and we were unable to recover it. 00:34:47.530 [2024-07-14 09:44:31.690135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.530 [2024-07-14 09:44:31.690179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.530 qpair failed and we were unable to recover it. 00:34:47.530 [2024-07-14 09:44:31.690410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.530 [2024-07-14 09:44:31.690457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.530 qpair failed and we were unable to recover it. 00:34:47.530 [2024-07-14 09:44:31.690694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.530 [2024-07-14 09:44:31.690740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.530 qpair failed and we were unable to recover it. 00:34:47.530 [2024-07-14 09:44:31.690959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.530 [2024-07-14 09:44:31.691003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.530 qpair failed and we were unable to recover it. 00:34:47.530 [2024-07-14 09:44:31.691221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.530 [2024-07-14 09:44:31.691265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.530 qpair failed and we were unable to recover it. 00:34:47.530 [2024-07-14 09:44:31.691484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.530 [2024-07-14 09:44:31.691538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.530 qpair failed and we were unable to recover it. 
00:34:47.530 [2024-07-14 09:44:31.691763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.530 [2024-07-14 09:44:31.691792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.530 qpair failed and we were unable to recover it. 00:34:47.530 [2024-07-14 09:44:31.692012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.530 [2024-07-14 09:44:31.692058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.530 qpair failed and we were unable to recover it. 00:34:47.530 [2024-07-14 09:44:31.692257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.530 [2024-07-14 09:44:31.692302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.530 qpair failed and we were unable to recover it. 00:34:47.530 [2024-07-14 09:44:31.692544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.530 [2024-07-14 09:44:31.692596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.530 qpair failed and we were unable to recover it. 00:34:47.530 [2024-07-14 09:44:31.692772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.530 [2024-07-14 09:44:31.692802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.530 qpair failed and we were unable to recover it. 00:34:47.530 [2024-07-14 09:44:31.693016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.530 [2024-07-14 09:44:31.693063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.530 qpair failed and we were unable to recover it. 00:34:47.530 [2024-07-14 09:44:31.693323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.530 [2024-07-14 09:44:31.693371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.530 qpair failed and we were unable to recover it. 00:34:47.530 [2024-07-14 09:44:31.693567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.530 [2024-07-14 09:44:31.693613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.530 qpair failed and we were unable to recover it. 00:34:47.530 [2024-07-14 09:44:31.693782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.530 [2024-07-14 09:44:31.693816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.530 qpair failed and we were unable to recover it. 00:34:47.530 [2024-07-14 09:44:31.694022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.530 [2024-07-14 09:44:31.694067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.530 qpair failed and we were unable to recover it. 
00:34:47.530 [2024-07-14 09:44:31.694288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.530 [2024-07-14 09:44:31.694334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.530 qpair failed and we were unable to recover it. 00:34:47.530 [2024-07-14 09:44:31.694551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.530 [2024-07-14 09:44:31.694598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.530 qpair failed and we were unable to recover it. 00:34:47.530 [2024-07-14 09:44:31.694758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.530 [2024-07-14 09:44:31.694787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.530 qpair failed and we were unable to recover it. 00:34:47.530 [2024-07-14 09:44:31.694969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.530 [2024-07-14 09:44:31.695014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.530 qpair failed and we were unable to recover it. 00:34:47.530 [2024-07-14 09:44:31.695239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.530 [2024-07-14 09:44:31.695296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.530 qpair failed and we were unable to recover it. 00:34:47.530 [2024-07-14 09:44:31.695721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.530 [2024-07-14 09:44:31.695781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.530 qpair failed and we were unable to recover it. 00:34:47.530 [2024-07-14 09:44:31.696028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.530 [2024-07-14 09:44:31.696074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.530 qpair failed and we were unable to recover it. 00:34:47.530 [2024-07-14 09:44:31.696332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.530 [2024-07-14 09:44:31.696380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.530 qpair failed and we were unable to recover it. 00:34:47.530 [2024-07-14 09:44:31.696668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.530 [2024-07-14 09:44:31.696725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.530 qpair failed and we were unable to recover it. 00:34:47.530 [2024-07-14 09:44:31.696944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.530 [2024-07-14 09:44:31.696990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.530 qpair failed and we were unable to recover it. 
00:34:47.530 [2024-07-14 09:44:31.697225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.530 [2024-07-14 09:44:31.697272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.530 qpair failed and we were unable to recover it. 00:34:47.530 [2024-07-14 09:44:31.697480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.530 [2024-07-14 09:44:31.697525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.530 qpair failed and we were unable to recover it. 00:34:47.530 [2024-07-14 09:44:31.697725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.531 [2024-07-14 09:44:31.697754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.531 qpair failed and we were unable to recover it. 00:34:47.531 [2024-07-14 09:44:31.697973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.531 [2024-07-14 09:44:31.698019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.531 qpair failed and we were unable to recover it. 00:34:47.531 [2024-07-14 09:44:31.698235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.531 [2024-07-14 09:44:31.698281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.531 qpair failed and we were unable to recover it. 00:34:47.531 [2024-07-14 09:44:31.698500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.531 [2024-07-14 09:44:31.698547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.531 qpair failed and we were unable to recover it. 00:34:47.531 [2024-07-14 09:44:31.698747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.531 [2024-07-14 09:44:31.698782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.531 qpair failed and we were unable to recover it. 00:34:47.531 [2024-07-14 09:44:31.698996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.531 [2024-07-14 09:44:31.699043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.531 qpair failed and we were unable to recover it. 00:34:47.531 [2024-07-14 09:44:31.699269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.531 [2024-07-14 09:44:31.699319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.531 qpair failed and we were unable to recover it. 00:34:47.531 [2024-07-14 09:44:31.699516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.531 [2024-07-14 09:44:31.699561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.531 qpair failed and we were unable to recover it. 
00:34:47.531 [2024-07-14 09:44:31.699755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.531 [2024-07-14 09:44:31.699784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.531 qpair failed and we were unable to recover it. 00:34:47.531 [2024-07-14 09:44:31.700028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.531 [2024-07-14 09:44:31.700075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.531 qpair failed and we were unable to recover it. 00:34:47.531 [2024-07-14 09:44:31.700301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.531 [2024-07-14 09:44:31.700346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.531 qpair failed and we were unable to recover it. 00:34:47.531 [2024-07-14 09:44:31.700551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.531 [2024-07-14 09:44:31.700600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.531 qpair failed and we were unable to recover it. 00:34:47.531 [2024-07-14 09:44:31.700792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.531 [2024-07-14 09:44:31.700820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.531 qpair failed and we were unable to recover it. 00:34:47.531 [2024-07-14 09:44:31.701042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.531 [2024-07-14 09:44:31.701091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.531 qpair failed and we were unable to recover it. 00:34:47.531 [2024-07-14 09:44:31.701297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.531 [2024-07-14 09:44:31.701343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.531 qpair failed and we were unable to recover it. 00:34:47.531 [2024-07-14 09:44:31.701558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.531 [2024-07-14 09:44:31.701614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.531 qpair failed and we were unable to recover it. 00:34:47.531 [2024-07-14 09:44:31.701847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.531 [2024-07-14 09:44:31.701882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.531 qpair failed and we were unable to recover it. 00:34:47.531 [2024-07-14 09:44:31.702102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.531 [2024-07-14 09:44:31.702148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.531 qpair failed and we were unable to recover it. 
00:34:47.531 [2024-07-14 09:44:31.702338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.531 [2024-07-14 09:44:31.702384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.531 qpair failed and we were unable to recover it. 00:34:47.531 [2024-07-14 09:44:31.702608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.531 [2024-07-14 09:44:31.702653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.531 qpair failed and we were unable to recover it. 00:34:47.531 [2024-07-14 09:44:31.702875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.531 [2024-07-14 09:44:31.702905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.531 qpair failed and we were unable to recover it. 00:34:47.531 [2024-07-14 09:44:31.703087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.531 [2024-07-14 09:44:31.703119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.531 qpair failed and we were unable to recover it. 00:34:47.531 [2024-07-14 09:44:31.703336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.531 [2024-07-14 09:44:31.703384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.531 qpair failed and we were unable to recover it. 00:34:47.531 [2024-07-14 09:44:31.703610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.531 [2024-07-14 09:44:31.703665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.531 qpair failed and we were unable to recover it. 00:34:47.531 [2024-07-14 09:44:31.703890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.531 [2024-07-14 09:44:31.703920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.531 qpair failed and we were unable to recover it. 00:34:47.531 [2024-07-14 09:44:31.704142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.531 [2024-07-14 09:44:31.704181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.531 qpair failed and we were unable to recover it. 00:34:47.531 [2024-07-14 09:44:31.704460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.531 [2024-07-14 09:44:31.704515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.531 qpair failed and we were unable to recover it. 00:34:47.531 [2024-07-14 09:44:31.704768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.531 [2024-07-14 09:44:31.704816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.531 qpair failed and we were unable to recover it. 
00:34:47.531 [2024-07-14 09:44:31.705025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.531 [2024-07-14 09:44:31.705055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.531 qpair failed and we were unable to recover it. 00:34:47.531 [2024-07-14 09:44:31.705290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.531 [2024-07-14 09:44:31.705337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.531 qpair failed and we were unable to recover it. 00:34:47.531 [2024-07-14 09:44:31.705556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.531 [2024-07-14 09:44:31.705602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.531 qpair failed and we were unable to recover it. 00:34:47.531 [2024-07-14 09:44:31.705832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.531 [2024-07-14 09:44:31.705863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.531 qpair failed and we were unable to recover it. 00:34:47.531 [2024-07-14 09:44:31.706088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.531 [2024-07-14 09:44:31.706133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.531 qpair failed and we were unable to recover it. 00:34:47.531 [2024-07-14 09:44:31.706380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.531 [2024-07-14 09:44:31.706427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.531 qpair failed and we were unable to recover it. 00:34:47.531 [2024-07-14 09:44:31.706649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.531 [2024-07-14 09:44:31.706696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.531 qpair failed and we were unable to recover it. 00:34:47.532 [2024-07-14 09:44:31.706882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.532 [2024-07-14 09:44:31.706912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.532 qpair failed and we were unable to recover it. 00:34:47.532 [2024-07-14 09:44:31.707112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.532 [2024-07-14 09:44:31.707152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.532 qpair failed and we were unable to recover it. 00:34:47.532 [2024-07-14 09:44:31.707376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.532 [2024-07-14 09:44:31.707420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.532 qpair failed and we were unable to recover it. 
00:34:47.532 [2024-07-14 09:44:31.707641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.532 [2024-07-14 09:44:31.707688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.532 qpair failed and we were unable to recover it. 00:34:47.532 [2024-07-14 09:44:31.707930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.532 [2024-07-14 09:44:31.707983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.532 qpair failed and we were unable to recover it. 00:34:47.532 [2024-07-14 09:44:31.708233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.532 [2024-07-14 09:44:31.708279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.532 qpair failed and we were unable to recover it. 00:34:47.532 [2024-07-14 09:44:31.708530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.532 [2024-07-14 09:44:31.708577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.532 qpair failed and we were unable to recover it. 00:34:47.532 [2024-07-14 09:44:31.708786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.532 [2024-07-14 09:44:31.708815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.532 qpair failed and we were unable to recover it. 00:34:47.532 [2024-07-14 09:44:31.709034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.532 [2024-07-14 09:44:31.709063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.532 qpair failed and we were unable to recover it. 00:34:47.532 [2024-07-14 09:44:31.709287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.532 [2024-07-14 09:44:31.709336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.532 qpair failed and we were unable to recover it. 00:34:47.532 [2024-07-14 09:44:31.709608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.532 [2024-07-14 09:44:31.709655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.532 qpair failed and we were unable to recover it. 00:34:47.532 [2024-07-14 09:44:31.709884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.532 [2024-07-14 09:44:31.709914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.532 qpair failed and we were unable to recover it. 00:34:47.532 [2024-07-14 09:44:31.710115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.532 [2024-07-14 09:44:31.710162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.532 qpair failed and we were unable to recover it. 
00:34:47.532 [2024-07-14 09:44:31.710419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.532 [2024-07-14 09:44:31.710467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.532 qpair failed and we were unable to recover it. 00:34:47.532 [2024-07-14 09:44:31.710715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.532 [2024-07-14 09:44:31.710761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.532 qpair failed and we were unable to recover it. 00:34:47.532 [2024-07-14 09:44:31.710965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.532 [2024-07-14 09:44:31.710999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.532 qpair failed and we were unable to recover it. 00:34:47.532 [2024-07-14 09:44:31.711260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.532 [2024-07-14 09:44:31.711314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.532 qpair failed and we were unable to recover it. 00:34:47.532 [2024-07-14 09:44:31.711578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.532 [2024-07-14 09:44:31.711628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.532 qpair failed and we were unable to recover it. 00:34:47.532 [2024-07-14 09:44:31.711834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.532 [2024-07-14 09:44:31.711864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.532 qpair failed and we were unable to recover it. 00:34:47.532 [2024-07-14 09:44:31.712076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.532 [2024-07-14 09:44:31.712106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.532 qpair failed and we were unable to recover it. 00:34:47.532 [2024-07-14 09:44:31.712353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.532 [2024-07-14 09:44:31.712400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.532 qpair failed and we were unable to recover it. 00:34:47.532 [2024-07-14 09:44:31.712637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.532 [2024-07-14 09:44:31.712683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.532 qpair failed and we were unable to recover it. 00:34:47.532 [2024-07-14 09:44:31.712881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.532 [2024-07-14 09:44:31.712911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.532 qpair failed and we were unable to recover it. 
00:34:47.532 [2024-07-14 09:44:31.713113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.532 [2024-07-14 09:44:31.713152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.532 qpair failed and we were unable to recover it. 00:34:47.532 [2024-07-14 09:44:31.713378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.532 [2024-07-14 09:44:31.713428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.532 qpair failed and we were unable to recover it. 00:34:47.532 [2024-07-14 09:44:31.713654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.532 [2024-07-14 09:44:31.713700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.532 qpair failed and we were unable to recover it. 00:34:47.532 [2024-07-14 09:44:31.713896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.532 [2024-07-14 09:44:31.713925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.532 qpair failed and we were unable to recover it. 00:34:47.532 [2024-07-14 09:44:31.714124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.532 [2024-07-14 09:44:31.714159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.532 qpair failed and we were unable to recover it. 00:34:47.532 [2024-07-14 09:44:31.714418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.532 [2024-07-14 09:44:31.714467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.532 qpair failed and we were unable to recover it. 00:34:47.532 [2024-07-14 09:44:31.714695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.532 [2024-07-14 09:44:31.714752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.532 qpair failed and we were unable to recover it. 00:34:47.532 [2024-07-14 09:44:31.714945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.532 [2024-07-14 09:44:31.714974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.532 qpair failed and we were unable to recover it. 00:34:47.532 [2024-07-14 09:44:31.715179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.532 [2024-07-14 09:44:31.715230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.532 qpair failed and we were unable to recover it. 00:34:47.532 [2024-07-14 09:44:31.715461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.532 [2024-07-14 09:44:31.715507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.532 qpair failed and we were unable to recover it. 
00:34:47.532 [2024-07-14 09:44:31.715699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.532 [2024-07-14 09:44:31.715744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.532 qpair failed and we were unable to recover it. 00:34:47.532 [2024-07-14 09:44:31.715952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.532 [2024-07-14 09:44:31.715981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.532 qpair failed and we were unable to recover it. 00:34:47.532 [2024-07-14 09:44:31.716181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.532 [2024-07-14 09:44:31.716227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.532 qpair failed and we were unable to recover it. 00:34:47.533 [2024-07-14 09:44:31.716459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.533 [2024-07-14 09:44:31.716505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.533 qpair failed and we were unable to recover it. 00:34:47.533 [2024-07-14 09:44:31.716701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.533 [2024-07-14 09:44:31.716731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.533 qpair failed and we were unable to recover it. 00:34:47.533 [2024-07-14 09:44:31.716951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.533 [2024-07-14 09:44:31.716998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.533 qpair failed and we were unable to recover it. 00:34:47.533 [2024-07-14 09:44:31.717199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.533 [2024-07-14 09:44:31.717256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.533 qpair failed and we were unable to recover it. 00:34:47.533 [2024-07-14 09:44:31.717499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.533 [2024-07-14 09:44:31.717545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.533 qpair failed and we were unable to recover it. 00:34:47.533 [2024-07-14 09:44:31.717734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.533 [2024-07-14 09:44:31.717763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.533 qpair failed and we were unable to recover it. 00:34:47.533 [2024-07-14 09:44:31.717989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.533 [2024-07-14 09:44:31.718035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.533 qpair failed and we were unable to recover it. 
00:34:47.533 [2024-07-14 09:44:31.718253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.533 [2024-07-14 09:44:31.718300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.533 qpair failed and we were unable to recover it. 00:34:47.533 [2024-07-14 09:44:31.718548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.533 [2024-07-14 09:44:31.718594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.533 qpair failed and we were unable to recover it. 00:34:47.533 [2024-07-14 09:44:31.718794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.533 [2024-07-14 09:44:31.718823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.533 qpair failed and we were unable to recover it. 00:34:47.533 [2024-07-14 09:44:31.719019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.533 [2024-07-14 09:44:31.719070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.533 qpair failed and we were unable to recover it. 00:34:47.533 [2024-07-14 09:44:31.719330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.533 [2024-07-14 09:44:31.719382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.533 qpair failed and we were unable to recover it. 00:34:47.533 [2024-07-14 09:44:31.719660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.533 [2024-07-14 09:44:31.719707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.533 qpair failed and we were unable to recover it. 00:34:47.533 [2024-07-14 09:44:31.719881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.533 [2024-07-14 09:44:31.719911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.533 qpair failed and we were unable to recover it. 00:34:47.533 [2024-07-14 09:44:31.720099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.533 [2024-07-14 09:44:31.720156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.533 qpair failed and we were unable to recover it. 00:34:47.533 [2024-07-14 09:44:31.720391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.533 [2024-07-14 09:44:31.720437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.533 qpair failed and we were unable to recover it. 00:34:47.533 [2024-07-14 09:44:31.720647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.533 [2024-07-14 09:44:31.720691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.533 qpair failed and we were unable to recover it. 
00:34:47.533 [2024-07-14 09:44:31.720889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.533 [2024-07-14 09:44:31.720918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.533 qpair failed and we were unable to recover it. 00:34:47.533 [2024-07-14 09:44:31.721158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.533 [2024-07-14 09:44:31.721203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.533 qpair failed and we were unable to recover it. 00:34:47.533 [2024-07-14 09:44:31.721417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.533 [2024-07-14 09:44:31.721464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.533 qpair failed and we were unable to recover it. 00:34:47.533 [2024-07-14 09:44:31.721689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.533 [2024-07-14 09:44:31.721746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.533 qpair failed and we were unable to recover it. 00:34:47.533 [2024-07-14 09:44:31.721940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.533 [2024-07-14 09:44:31.721997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.533 qpair failed and we were unable to recover it. 00:34:47.533 [2024-07-14 09:44:31.722220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.533 [2024-07-14 09:44:31.722266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.533 qpair failed and we were unable to recover it. 00:34:47.533 [2024-07-14 09:44:31.722512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.533 [2024-07-14 09:44:31.722543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.533 qpair failed and we were unable to recover it. 00:34:47.533 [2024-07-14 09:44:31.722766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.533 [2024-07-14 09:44:31.722796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.533 qpair failed and we were unable to recover it. 00:34:47.533 [2024-07-14 09:44:31.723001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.533 [2024-07-14 09:44:31.723030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.533 qpair failed and we were unable to recover it. 00:34:47.533 [2024-07-14 09:44:31.723251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.533 [2024-07-14 09:44:31.723282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.533 qpair failed and we were unable to recover it. 
00:34:47.533 [2024-07-14 09:44:31.723513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.533 [2024-07-14 09:44:31.723543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.533 qpair failed and we were unable to recover it. 00:34:47.533 [2024-07-14 09:44:31.723815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.533 [2024-07-14 09:44:31.723844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.533 qpair failed and we were unable to recover it. 00:34:47.533 [2024-07-14 09:44:31.724043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.533 [2024-07-14 09:44:31.724071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.533 qpair failed and we were unable to recover it. 00:34:47.533 [2024-07-14 09:44:31.724265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.533 [2024-07-14 09:44:31.724292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.533 qpair failed and we were unable to recover it. 00:34:47.533 [2024-07-14 09:44:31.724513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.533 [2024-07-14 09:44:31.724543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.533 qpair failed and we were unable to recover it. 00:34:47.533 [2024-07-14 09:44:31.724719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.533 [2024-07-14 09:44:31.724752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.533 qpair failed and we were unable to recover it. 00:34:47.533 [2024-07-14 09:44:31.724980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.533 [2024-07-14 09:44:31.725007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.533 qpair failed and we were unable to recover it. 00:34:47.533 [2024-07-14 09:44:31.725188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.533 [2024-07-14 09:44:31.725218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.533 qpair failed and we were unable to recover it. 00:34:47.533 [2024-07-14 09:44:31.725419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.533 [2024-07-14 09:44:31.725449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.533 qpair failed and we were unable to recover it. 00:34:47.533 [2024-07-14 09:44:31.725671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.533 [2024-07-14 09:44:31.725700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.533 qpair failed and we were unable to recover it. 
00:34:47.533 [2024-07-14 09:44:31.725906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.533 [2024-07-14 09:44:31.725933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.533 qpair failed and we were unable to recover it. 00:34:47.533 [2024-07-14 09:44:31.726093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.533 [2024-07-14 09:44:31.726120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.533 qpair failed and we were unable to recover it. 00:34:47.533 [2024-07-14 09:44:31.726293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.533 [2024-07-14 09:44:31.726320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.533 qpair failed and we were unable to recover it. 00:34:47.533 [2024-07-14 09:44:31.726556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.534 [2024-07-14 09:44:31.726586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.534 qpair failed and we were unable to recover it. 00:34:47.534 [2024-07-14 09:44:31.726767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.534 [2024-07-14 09:44:31.726797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.534 qpair failed and we were unable to recover it. 00:34:47.534 [2024-07-14 09:44:31.726994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.534 [2024-07-14 09:44:31.727022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.534 qpair failed and we were unable to recover it. 00:34:47.534 [2024-07-14 09:44:31.727188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.534 [2024-07-14 09:44:31.727215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.534 qpair failed and we were unable to recover it. 00:34:47.534 [2024-07-14 09:44:31.727402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.534 [2024-07-14 09:44:31.727429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.534 qpair failed and we were unable to recover it. 00:34:47.534 [2024-07-14 09:44:31.727619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.534 [2024-07-14 09:44:31.727646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.534 qpair failed and we were unable to recover it. 00:34:47.534 [2024-07-14 09:44:31.727887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.534 [2024-07-14 09:44:31.727933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.534 qpair failed and we were unable to recover it. 
00:34:47.534 [2024-07-14 09:44:31.728143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.534 [2024-07-14 09:44:31.728170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.534 qpair failed and we were unable to recover it. 00:34:47.534 [2024-07-14 09:44:31.728360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.534 [2024-07-14 09:44:31.728387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.534 qpair failed and we were unable to recover it. 00:34:47.534 [2024-07-14 09:44:31.728587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.534 [2024-07-14 09:44:31.728622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.534 qpair failed and we were unable to recover it. 00:34:47.534 [2024-07-14 09:44:31.728793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.534 [2024-07-14 09:44:31.728822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.534 qpair failed and we were unable to recover it. 00:34:47.534 [2024-07-14 09:44:31.729029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.534 [2024-07-14 09:44:31.729057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.534 qpair failed and we were unable to recover it. 00:34:47.534 [2024-07-14 09:44:31.729230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.534 [2024-07-14 09:44:31.729258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.534 qpair failed and we were unable to recover it. 00:34:47.534 [2024-07-14 09:44:31.729449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.534 [2024-07-14 09:44:31.729476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.534 qpair failed and we were unable to recover it. 00:34:47.534 [2024-07-14 09:44:31.729631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.534 [2024-07-14 09:44:31.729658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.534 qpair failed and we were unable to recover it. 00:34:47.534 [2024-07-14 09:44:31.729878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.534 [2024-07-14 09:44:31.729923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.534 qpair failed and we were unable to recover it. 00:34:47.534 [2024-07-14 09:44:31.730115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.534 [2024-07-14 09:44:31.730142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.534 qpair failed and we were unable to recover it. 
00:34:47.534 [2024-07-14 09:44:31.730304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.534 [2024-07-14 09:44:31.730331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.534 qpair failed and we were unable to recover it. 00:34:47.534 [2024-07-14 09:44:31.730548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.534 [2024-07-14 09:44:31.730577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.534 qpair failed and we were unable to recover it. 00:34:47.534 [2024-07-14 09:44:31.730786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.534 [2024-07-14 09:44:31.730815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.534 qpair failed and we were unable to recover it. 00:34:47.534 [2024-07-14 09:44:31.731008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.534 [2024-07-14 09:44:31.731036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.534 qpair failed and we were unable to recover it. 00:34:47.534 [2024-07-14 09:44:31.731204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.534 [2024-07-14 09:44:31.731232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.534 qpair failed and we were unable to recover it. 00:34:47.534 [2024-07-14 09:44:31.731415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.534 [2024-07-14 09:44:31.731444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.534 qpair failed and we were unable to recover it. 00:34:47.534 [2024-07-14 09:44:31.731619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.534 [2024-07-14 09:44:31.731648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.534 qpair failed and we were unable to recover it. 00:34:47.534 [2024-07-14 09:44:31.731834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.534 [2024-07-14 09:44:31.731861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.534 qpair failed and we were unable to recover it. 00:34:47.534 [2024-07-14 09:44:31.732026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.534 [2024-07-14 09:44:31.732053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.534 qpair failed and we were unable to recover it. 00:34:47.534 [2024-07-14 09:44:31.732270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.534 [2024-07-14 09:44:31.732298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.534 qpair failed and we were unable to recover it. 
00:34:47.534 [2024-07-14 09:44:31.732494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.534 [2024-07-14 09:44:31.732537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.534 qpair failed and we were unable to recover it. 00:34:47.534 [2024-07-14 09:44:31.732708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.534 [2024-07-14 09:44:31.732737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.534 qpair failed and we were unable to recover it. 00:34:47.534 [2024-07-14 09:44:31.732943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.534 [2024-07-14 09:44:31.732971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.534 qpair failed and we were unable to recover it. 00:34:47.534 [2024-07-14 09:44:31.733124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.534 [2024-07-14 09:44:31.733151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.534 qpair failed and we were unable to recover it. 00:34:47.534 [2024-07-14 09:44:31.733341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.534 [2024-07-14 09:44:31.733368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.534 qpair failed and we were unable to recover it. 00:34:47.534 [2024-07-14 09:44:31.733576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.534 [2024-07-14 09:44:31.733603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.534 qpair failed and we were unable to recover it. 00:34:47.534 [2024-07-14 09:44:31.733761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.534 [2024-07-14 09:44:31.733789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.534 qpair failed and we were unable to recover it. 00:34:47.534 [2024-07-14 09:44:31.734017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.534 [2024-07-14 09:44:31.734044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.534 qpair failed and we were unable to recover it. 00:34:47.534 [2024-07-14 09:44:31.734204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.534 [2024-07-14 09:44:31.734231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.534 qpair failed and we were unable to recover it. 00:34:47.534 [2024-07-14 09:44:31.734462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.534 [2024-07-14 09:44:31.734492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.534 qpair failed and we were unable to recover it. 
00:34:47.534 [2024-07-14 09:44:31.734685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.534 [2024-07-14 09:44:31.734713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.534 qpair failed and we were unable to recover it. 00:34:47.534 [2024-07-14 09:44:31.734886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.534 [2024-07-14 09:44:31.734929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.534 qpair failed and we were unable to recover it. 00:34:47.534 [2024-07-14 09:44:31.735143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.534 [2024-07-14 09:44:31.735170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.534 qpair failed and we were unable to recover it. 00:34:47.534 [2024-07-14 09:44:31.735438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.534 [2024-07-14 09:44:31.735468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.534 qpair failed and we were unable to recover it. 00:34:47.534 [2024-07-14 09:44:31.735703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.534 [2024-07-14 09:44:31.735732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.535 qpair failed and we were unable to recover it. 00:34:47.535 [2024-07-14 09:44:31.735964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.535 [2024-07-14 09:44:31.735992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.535 qpair failed and we were unable to recover it. 00:34:47.535 [2024-07-14 09:44:31.736173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.535 [2024-07-14 09:44:31.736211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.535 qpair failed and we were unable to recover it. 00:34:47.535 [2024-07-14 09:44:31.736449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.535 [2024-07-14 09:44:31.736477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.535 qpair failed and we were unable to recover it. 00:34:47.535 [2024-07-14 09:44:31.736719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.535 [2024-07-14 09:44:31.736747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.535 qpair failed and we were unable to recover it. 00:34:47.535 [2024-07-14 09:44:31.736954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.535 [2024-07-14 09:44:31.736982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.535 qpair failed and we were unable to recover it. 
00:34:47.535 [2024-07-14 09:44:31.737174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.535 [2024-07-14 09:44:31.737205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.535 qpair failed and we were unable to recover it. 00:34:47.535 [2024-07-14 09:44:31.737405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.535 [2024-07-14 09:44:31.737442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.535 qpair failed and we were unable to recover it. 00:34:47.535 [2024-07-14 09:44:31.737639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.535 [2024-07-14 09:44:31.737667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.535 qpair failed and we were unable to recover it. 00:34:47.535 [2024-07-14 09:44:31.737860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.535 [2024-07-14 09:44:31.737915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.535 qpair failed and we were unable to recover it. 00:34:47.535 [2024-07-14 09:44:31.738126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.535 [2024-07-14 09:44:31.738154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.535 qpair failed and we were unable to recover it. 00:34:47.535 [2024-07-14 09:44:31.738398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.535 [2024-07-14 09:44:31.738425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.535 qpair failed and we were unable to recover it. 00:34:47.535 [2024-07-14 09:44:31.738645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.535 [2024-07-14 09:44:31.738672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.535 qpair failed and we were unable to recover it. 00:34:47.535 [2024-07-14 09:44:31.738854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.535 [2024-07-14 09:44:31.738892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.535 qpair failed and we were unable to recover it. 00:34:47.535 [2024-07-14 09:44:31.739063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.535 [2024-07-14 09:44:31.739091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.535 qpair failed and we were unable to recover it. 00:34:47.535 [2024-07-14 09:44:31.739295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.535 [2024-07-14 09:44:31.739322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.535 qpair failed and we were unable to recover it. 
00:34:47.535 [2024-07-14 09:44:31.739530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.535 [2024-07-14 09:44:31.739558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.535 qpair failed and we were unable to recover it. 00:34:47.535 [2024-07-14 09:44:31.739795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.535 [2024-07-14 09:44:31.739822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.535 qpair failed and we were unable to recover it. 00:34:47.535 [2024-07-14 09:44:31.740021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.535 [2024-07-14 09:44:31.740054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.535 qpair failed and we were unable to recover it. 00:34:47.535 [2024-07-14 09:44:31.740248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.535 [2024-07-14 09:44:31.740275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.535 qpair failed and we were unable to recover it. 00:34:47.535 [2024-07-14 09:44:31.740459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.535 [2024-07-14 09:44:31.740488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.535 qpair failed and we were unable to recover it. 00:34:47.535 [2024-07-14 09:44:31.740663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.535 [2024-07-14 09:44:31.740692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.535 qpair failed and we were unable to recover it. 00:34:47.535 [2024-07-14 09:44:31.740909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.535 [2024-07-14 09:44:31.740938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.535 qpair failed and we were unable to recover it. 00:34:47.535 [2024-07-14 09:44:31.741147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.535 [2024-07-14 09:44:31.741191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.535 qpair failed and we were unable to recover it. 00:34:47.535 [2024-07-14 09:44:31.741425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.535 [2024-07-14 09:44:31.741455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.535 qpair failed and we were unable to recover it. 00:34:47.535 [2024-07-14 09:44:31.741691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.535 [2024-07-14 09:44:31.741717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.535 qpair failed and we were unable to recover it. 
00:34:47.535 [2024-07-14 09:44:31.741964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.535 [2024-07-14 09:44:31.741994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.535 qpair failed and we were unable to recover it. 00:34:47.535 [2024-07-14 09:44:31.742249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.535 [2024-07-14 09:44:31.742276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.535 qpair failed and we were unable to recover it. 00:34:47.535 [2024-07-14 09:44:31.742495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.535 [2024-07-14 09:44:31.742523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.535 qpair failed and we were unable to recover it. 00:34:47.535 [2024-07-14 09:44:31.742744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.535 [2024-07-14 09:44:31.742774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.535 qpair failed and we were unable to recover it. 00:34:47.535 [2024-07-14 09:44:31.742978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.535 [2024-07-14 09:44:31.743006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.535 qpair failed and we were unable to recover it. 00:34:47.535 [2024-07-14 09:44:31.743191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.535 [2024-07-14 09:44:31.743218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.535 qpair failed and we were unable to recover it. 00:34:47.535 [2024-07-14 09:44:31.743397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.535 [2024-07-14 09:44:31.743427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.535 qpair failed and we were unable to recover it. 00:34:47.535 [2024-07-14 09:44:31.743614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.535 [2024-07-14 09:44:31.743644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.535 qpair failed and we were unable to recover it. 00:34:47.535 [2024-07-14 09:44:31.743830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.535 [2024-07-14 09:44:31.743857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.535 qpair failed and we were unable to recover it. 00:34:47.535 [2024-07-14 09:44:31.744041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.535 [2024-07-14 09:44:31.744071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.535 qpair failed and we were unable to recover it. 
00:34:47.535 [2024-07-14 09:44:31.744256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.535 [2024-07-14 09:44:31.744290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.535 qpair failed and we were unable to recover it. 00:34:47.535 [2024-07-14 09:44:31.744474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.535 [2024-07-14 09:44:31.744501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.535 qpair failed and we were unable to recover it. 00:34:47.535 [2024-07-14 09:44:31.744697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.535 [2024-07-14 09:44:31.744724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.535 qpair failed and we were unable to recover it. 00:34:47.535 [2024-07-14 09:44:31.744939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.535 [2024-07-14 09:44:31.744970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.535 qpair failed and we were unable to recover it. 00:34:47.535 [2024-07-14 09:44:31.745211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.535 [2024-07-14 09:44:31.745238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.535 qpair failed and we were unable to recover it. 00:34:47.535 [2024-07-14 09:44:31.745422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.535 [2024-07-14 09:44:31.745450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.535 qpair failed and we were unable to recover it. 00:34:47.535 [2024-07-14 09:44:31.745637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.536 [2024-07-14 09:44:31.745664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.536 qpair failed and we were unable to recover it. 00:34:47.536 [2024-07-14 09:44:31.745890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.536 [2024-07-14 09:44:31.745918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.536 qpair failed and we were unable to recover it. 00:34:47.536 [2024-07-14 09:44:31.746136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.536 [2024-07-14 09:44:31.746164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.536 qpair failed and we were unable to recover it. 00:34:47.536 [2024-07-14 09:44:31.746399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.536 [2024-07-14 09:44:31.746426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.536 qpair failed and we were unable to recover it. 
00:34:47.536 [2024-07-14 09:44:31.746644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.536 [2024-07-14 09:44:31.746671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.536 qpair failed and we were unable to recover it. 00:34:47.536 [2024-07-14 09:44:31.746908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.536 [2024-07-14 09:44:31.746936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.536 qpair failed and we were unable to recover it. 00:34:47.536 [2024-07-14 09:44:31.747179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.536 [2024-07-14 09:44:31.747206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.536 qpair failed and we were unable to recover it. 00:34:47.536 [2024-07-14 09:44:31.747426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.536 [2024-07-14 09:44:31.747453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.536 qpair failed and we were unable to recover it. 00:34:47.536 [2024-07-14 09:44:31.747696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.536 [2024-07-14 09:44:31.747726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.536 qpair failed and we were unable to recover it. 00:34:47.536 [2024-07-14 09:44:31.747937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.536 [2024-07-14 09:44:31.747968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.536 qpair failed and we were unable to recover it. 00:34:47.536 [2024-07-14 09:44:31.748185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.536 [2024-07-14 09:44:31.748212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.536 qpair failed and we were unable to recover it. 00:34:47.536 [2024-07-14 09:44:31.748446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.536 [2024-07-14 09:44:31.748474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.536 qpair failed and we were unable to recover it. 00:34:47.536 [2024-07-14 09:44:31.748655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.536 [2024-07-14 09:44:31.748682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.536 qpair failed and we were unable to recover it. 00:34:47.536 [2024-07-14 09:44:31.748881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.536 [2024-07-14 09:44:31.748908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.536 qpair failed and we were unable to recover it. 
00:34:47.536 [2024-07-14 09:44:31.749161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.536 [2024-07-14 09:44:31.749189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.536 qpair failed and we were unable to recover it. 00:34:47.536 [2024-07-14 09:44:31.749386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.536 [2024-07-14 09:44:31.749414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.536 qpair failed and we were unable to recover it. 00:34:47.536 [2024-07-14 09:44:31.749591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.536 [2024-07-14 09:44:31.749618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.536 qpair failed and we were unable to recover it. 00:34:47.536 [2024-07-14 09:44:31.749837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.536 [2024-07-14 09:44:31.749874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.536 qpair failed and we were unable to recover it. 00:34:47.536 [2024-07-14 09:44:31.750087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.536 [2024-07-14 09:44:31.750117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.536 qpair failed and we were unable to recover it. 00:34:47.536 [2024-07-14 09:44:31.750322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.536 [2024-07-14 09:44:31.750349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.536 qpair failed and we were unable to recover it. 00:34:47.536 [2024-07-14 09:44:31.750526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.536 [2024-07-14 09:44:31.750556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.536 qpair failed and we were unable to recover it. 00:34:47.536 [2024-07-14 09:44:31.750803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.536 [2024-07-14 09:44:31.750830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.536 qpair failed and we were unable to recover it. 00:34:47.536 [2024-07-14 09:44:31.751030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.536 [2024-07-14 09:44:31.751058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.536 qpair failed and we were unable to recover it. 00:34:47.536 [2024-07-14 09:44:31.751285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.536 [2024-07-14 09:44:31.751316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.536 qpair failed and we were unable to recover it. 
00:34:47.536 [2024-07-14 09:44:31.751527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.536 [2024-07-14 09:44:31.751558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.536 qpair failed and we were unable to recover it. 00:34:47.536 [2024-07-14 09:44:31.751786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.536 [2024-07-14 09:44:31.751813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.536 qpair failed and we were unable to recover it. 00:34:47.536 [2024-07-14 09:44:31.752016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.536 [2024-07-14 09:44:31.752044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.536 qpair failed and we were unable to recover it. 00:34:47.536 [2024-07-14 09:44:31.752291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.536 [2024-07-14 09:44:31.752319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.536 qpair failed and we were unable to recover it. 00:34:47.536 [2024-07-14 09:44:31.752521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.536 [2024-07-14 09:44:31.752548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.536 qpair failed and we were unable to recover it. 00:34:47.536 [2024-07-14 09:44:31.752767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.536 [2024-07-14 09:44:31.752797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.536 qpair failed and we were unable to recover it. 00:34:47.536 [2024-07-14 09:44:31.753045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.536 [2024-07-14 09:44:31.753073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.536 qpair failed and we were unable to recover it. 00:34:47.536 [2024-07-14 09:44:31.753295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.536 [2024-07-14 09:44:31.753323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.536 qpair failed and we were unable to recover it. 00:34:47.536 [2024-07-14 09:44:31.753570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.536 [2024-07-14 09:44:31.753597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.536 qpair failed and we were unable to recover it. 00:34:47.536 [2024-07-14 09:44:31.753837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.536 [2024-07-14 09:44:31.753886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.536 qpair failed and we were unable to recover it. 
00:34:47.536 [2024-07-14 09:44:31.754103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.536 [2024-07-14 09:44:31.754130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.536 qpair failed and we were unable to recover it. 00:34:47.536 [2024-07-14 09:44:31.754359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.537 [2024-07-14 09:44:31.754393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.537 qpair failed and we were unable to recover it. 00:34:47.537 [2024-07-14 09:44:31.754641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.537 [2024-07-14 09:44:31.754671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.537 qpair failed and we were unable to recover it. 00:34:47.537 [2024-07-14 09:44:31.754908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.537 [2024-07-14 09:44:31.754936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.537 qpair failed and we were unable to recover it. 00:34:47.537 [2024-07-14 09:44:31.755136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.537 [2024-07-14 09:44:31.755164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.537 qpair failed and we were unable to recover it. 00:34:47.537 [2024-07-14 09:44:31.755356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.537 [2024-07-14 09:44:31.755384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.537 qpair failed and we were unable to recover it. 00:34:47.537 [2024-07-14 09:44:31.755590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.537 [2024-07-14 09:44:31.755617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.537 qpair failed and we were unable to recover it. 00:34:47.537 [2024-07-14 09:44:31.755817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.537 [2024-07-14 09:44:31.755844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.537 qpair failed and we were unable to recover it. 00:34:47.537 [2024-07-14 09:44:31.756063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.537 [2024-07-14 09:44:31.756093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.537 qpair failed and we were unable to recover it. 00:34:47.537 [2024-07-14 09:44:31.756308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.537 [2024-07-14 09:44:31.756335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.537 qpair failed and we were unable to recover it. 
00:34:47.537 [2024-07-14 09:44:31.756550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.537 [2024-07-14 09:44:31.756580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.537 qpair failed and we were unable to recover it. 00:34:47.537 [2024-07-14 09:44:31.756786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.537 [2024-07-14 09:44:31.756813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.537 qpair failed and we were unable to recover it. 00:34:47.537 [2024-07-14 09:44:31.757029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.537 [2024-07-14 09:44:31.757056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.537 qpair failed and we were unable to recover it. 00:34:47.537 [2024-07-14 09:44:31.757268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.537 [2024-07-14 09:44:31.757298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.537 qpair failed and we were unable to recover it. 00:34:47.537 [2024-07-14 09:44:31.757530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.537 [2024-07-14 09:44:31.757560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.537 qpair failed and we were unable to recover it. 00:34:47.537 [2024-07-14 09:44:31.757761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.537 [2024-07-14 09:44:31.757789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.537 qpair failed and we were unable to recover it. 00:34:47.537 [2024-07-14 09:44:31.758004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.537 [2024-07-14 09:44:31.758035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.537 qpair failed and we were unable to recover it. 00:34:47.537 [2024-07-14 09:44:31.758234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.537 [2024-07-14 09:44:31.758264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.537 qpair failed and we were unable to recover it. 00:34:47.537 [2024-07-14 09:44:31.758476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.537 [2024-07-14 09:44:31.758503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.537 qpair failed and we were unable to recover it. 00:34:47.537 [2024-07-14 09:44:31.758688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.537 [2024-07-14 09:44:31.758715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.537 qpair failed and we were unable to recover it. 
00:34:47.537 [2024-07-14 09:44:31.758931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.537 [2024-07-14 09:44:31.758962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.537 qpair failed and we were unable to recover it. 00:34:47.537 [2024-07-14 09:44:31.759169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.537 [2024-07-14 09:44:31.759196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.537 qpair failed and we were unable to recover it. 00:34:47.537 [2024-07-14 09:44:31.759435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.537 [2024-07-14 09:44:31.759465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.537 qpair failed and we were unable to recover it. 00:34:47.537 [2024-07-14 09:44:31.759676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.537 [2024-07-14 09:44:31.759706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.537 qpair failed and we were unable to recover it. 00:34:47.537 [2024-07-14 09:44:31.759890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.537 [2024-07-14 09:44:31.759917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.537 qpair failed and we were unable to recover it. 00:34:47.537 [2024-07-14 09:44:31.760158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.537 [2024-07-14 09:44:31.760189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.537 qpair failed and we were unable to recover it. 00:34:47.537 [2024-07-14 09:44:31.760394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.537 [2024-07-14 09:44:31.760423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.537 qpair failed and we were unable to recover it. 00:34:47.537 [2024-07-14 09:44:31.760661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.537 [2024-07-14 09:44:31.760688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.537 qpair failed and we were unable to recover it. 00:34:47.537 [2024-07-14 09:44:31.760903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.537 [2024-07-14 09:44:31.760938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.537 qpair failed and we were unable to recover it. 00:34:47.537 [2024-07-14 09:44:31.761149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.537 [2024-07-14 09:44:31.761179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.537 qpair failed and we were unable to recover it. 
00:34:47.537 [2024-07-14 09:44:31.761386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.537 [2024-07-14 09:44:31.761413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.537 qpair failed and we were unable to recover it. 00:34:47.537 [2024-07-14 09:44:31.761641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.537 [2024-07-14 09:44:31.761671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.537 qpair failed and we were unable to recover it. 00:34:47.537 [2024-07-14 09:44:31.761914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.537 [2024-07-14 09:44:31.761944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.537 qpair failed and we were unable to recover it. 00:34:47.537 [2024-07-14 09:44:31.762179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.537 [2024-07-14 09:44:31.762206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.537 qpair failed and we were unable to recover it. 00:34:47.537 [2024-07-14 09:44:31.762460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.537 [2024-07-14 09:44:31.762490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.537 qpair failed and we were unable to recover it. 00:34:47.537 [2024-07-14 09:44:31.762728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.537 [2024-07-14 09:44:31.762755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.537 qpair failed and we were unable to recover it. 00:34:47.537 [2024-07-14 09:44:31.762974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.537 [2024-07-14 09:44:31.763002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.537 qpair failed and we were unable to recover it. 00:34:47.537 [2024-07-14 09:44:31.763209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.537 [2024-07-14 09:44:31.763237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.537 qpair failed and we were unable to recover it. 00:34:47.537 [2024-07-14 09:44:31.763415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.537 [2024-07-14 09:44:31.763445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.537 qpair failed and we were unable to recover it. 00:34:47.537 [2024-07-14 09:44:31.763687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.537 [2024-07-14 09:44:31.763714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.537 qpair failed and we were unable to recover it. 
00:34:47.537 [2024-07-14 09:44:31.763919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.537 [2024-07-14 09:44:31.763947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.537 qpair failed and we were unable to recover it. 00:34:47.537 [2024-07-14 09:44:31.764174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.537 [2024-07-14 09:44:31.764204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.537 qpair failed and we were unable to recover it. 00:34:47.537 [2024-07-14 09:44:31.764429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.537 [2024-07-14 09:44:31.764456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.537 qpair failed and we were unable to recover it. 00:34:47.538 [2024-07-14 09:44:31.764650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.538 [2024-07-14 09:44:31.764678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.538 qpair failed and we were unable to recover it. 00:34:47.538 [2024-07-14 09:44:31.764843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.538 [2024-07-14 09:44:31.764878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.538 qpair failed and we were unable to recover it. 00:34:47.538 [2024-07-14 09:44:31.765099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.538 [2024-07-14 09:44:31.765126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.538 qpair failed and we were unable to recover it. 00:34:47.538 [2024-07-14 09:44:31.765372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.538 [2024-07-14 09:44:31.765401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.538 qpair failed and we were unable to recover it. 00:34:47.538 [2024-07-14 09:44:31.765592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.538 [2024-07-14 09:44:31.765622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.538 qpair failed and we were unable to recover it. 00:34:47.538 [2024-07-14 09:44:31.765804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.538 [2024-07-14 09:44:31.765831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.538 qpair failed and we were unable to recover it. 00:34:47.538 [2024-07-14 09:44:31.766039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.538 [2024-07-14 09:44:31.766067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.538 qpair failed and we were unable to recover it. 
00:34:47.538 [2024-07-14 09:44:31.766256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.538 [2024-07-14 09:44:31.766284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.538 qpair failed and we were unable to recover it. 00:34:47.538 [2024-07-14 09:44:31.766512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.538 [2024-07-14 09:44:31.766539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.538 qpair failed and we were unable to recover it. 00:34:47.538 [2024-07-14 09:44:31.766756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.538 [2024-07-14 09:44:31.766798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.538 qpair failed and we were unable to recover it. 00:34:47.538 [2024-07-14 09:44:31.767020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.538 [2024-07-14 09:44:31.767047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.538 qpair failed and we were unable to recover it. 00:34:47.538 [2024-07-14 09:44:31.767235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.538 [2024-07-14 09:44:31.767262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.538 qpair failed and we were unable to recover it. 00:34:47.538 [2024-07-14 09:44:31.767484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.538 [2024-07-14 09:44:31.767514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.538 qpair failed and we were unable to recover it. 00:34:47.538 [2024-07-14 09:44:31.767756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.538 [2024-07-14 09:44:31.767783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.538 qpair failed and we were unable to recover it. 00:34:47.538 [2024-07-14 09:44:31.768009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.538 [2024-07-14 09:44:31.768037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.538 qpair failed and we were unable to recover it. 00:34:47.538 [2024-07-14 09:44:31.768301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.538 [2024-07-14 09:44:31.768328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.538 qpair failed and we were unable to recover it. 00:34:47.538 [2024-07-14 09:44:31.768517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.538 [2024-07-14 09:44:31.768544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.538 qpair failed and we were unable to recover it. 
00:34:47.538 [2024-07-14 09:44:31.768797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.538 [2024-07-14 09:44:31.768824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.538 qpair failed and we were unable to recover it. 00:34:47.538 [2024-07-14 09:44:31.769026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.538 [2024-07-14 09:44:31.769054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.538 qpair failed and we were unable to recover it. 00:34:47.538 [2024-07-14 09:44:31.769268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.538 [2024-07-14 09:44:31.769299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.538 qpair failed and we were unable to recover it. 00:34:47.538 [2024-07-14 09:44:31.769512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.538 [2024-07-14 09:44:31.769539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.538 qpair failed and we were unable to recover it. 00:34:47.538 [2024-07-14 09:44:31.769726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.538 [2024-07-14 09:44:31.769755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.538 qpair failed and we were unable to recover it. 00:34:47.538 [2024-07-14 09:44:31.769977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.538 [2024-07-14 09:44:31.770005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.538 qpair failed and we were unable to recover it. 00:34:47.538 [2024-07-14 09:44:31.770221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.538 [2024-07-14 09:44:31.770249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.538 qpair failed and we were unable to recover it. 00:34:47.538 [2024-07-14 09:44:31.770495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.538 [2024-07-14 09:44:31.770525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.538 qpair failed and we were unable to recover it. 00:34:47.538 [2024-07-14 09:44:31.770746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.538 [2024-07-14 09:44:31.770776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.538 qpair failed and we were unable to recover it. 00:34:47.538 [2024-07-14 09:44:31.770978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.538 [2024-07-14 09:44:31.771010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.538 qpair failed and we were unable to recover it. 
00:34:47.538 [2024-07-14 09:44:31.771218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.538 [2024-07-14 09:44:31.771246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.538 qpair failed and we were unable to recover it. 00:34:47.538 [2024-07-14 09:44:31.771435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.538 [2024-07-14 09:44:31.771465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.538 qpair failed and we were unable to recover it. 00:34:47.538 [2024-07-14 09:44:31.771656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.538 [2024-07-14 09:44:31.771683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.538 qpair failed and we were unable to recover it. 00:34:47.538 [2024-07-14 09:44:31.771876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.538 [2024-07-14 09:44:31.771903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.538 qpair failed and we were unable to recover it. 00:34:47.538 [2024-07-14 09:44:31.772102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.538 [2024-07-14 09:44:31.772130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.538 qpair failed and we were unable to recover it. 00:34:47.538 [2024-07-14 09:44:31.772356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.538 [2024-07-14 09:44:31.772383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.538 qpair failed and we were unable to recover it. 00:34:47.538 [2024-07-14 09:44:31.772637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.538 [2024-07-14 09:44:31.772664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.538 qpair failed and we were unable to recover it. 00:34:47.538 [2024-07-14 09:44:31.772906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.538 [2024-07-14 09:44:31.772936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.538 qpair failed and we were unable to recover it. 00:34:47.538 [2024-07-14 09:44:31.773177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.538 [2024-07-14 09:44:31.773204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.538 qpair failed and we were unable to recover it. 00:34:47.538 [2024-07-14 09:44:31.773428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.538 [2024-07-14 09:44:31.773458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.538 qpair failed and we were unable to recover it. 
00:34:47.538 [2024-07-14 09:44:31.773659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.538 [2024-07-14 09:44:31.773688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.538 qpair failed and we were unable to recover it. 00:34:47.538 [2024-07-14 09:44:31.773936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.538 [2024-07-14 09:44:31.773963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.538 qpair failed and we were unable to recover it. 00:34:47.538 [2024-07-14 09:44:31.774165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.538 [2024-07-14 09:44:31.774195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.538 qpair failed and we were unable to recover it. 00:34:47.538 [2024-07-14 09:44:31.774443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.538 [2024-07-14 09:44:31.774474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.538 qpair failed and we were unable to recover it. 00:34:47.538 [2024-07-14 09:44:31.774683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.539 [2024-07-14 09:44:31.774711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.539 qpair failed and we were unable to recover it. 00:34:47.539 [2024-07-14 09:44:31.774884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.539 [2024-07-14 09:44:31.774912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.539 qpair failed and we were unable to recover it. 00:34:47.539 [2024-07-14 09:44:31.775105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.539 [2024-07-14 09:44:31.775133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.539 qpair failed and we were unable to recover it. 00:34:47.539 [2024-07-14 09:44:31.775325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.539 [2024-07-14 09:44:31.775352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.539 qpair failed and we were unable to recover it. 00:34:47.539 [2024-07-14 09:44:31.775559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.539 [2024-07-14 09:44:31.775589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.539 qpair failed and we were unable to recover it. 00:34:47.539 [2024-07-14 09:44:31.775804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.539 [2024-07-14 09:44:31.775831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.539 qpair failed and we were unable to recover it. 
00:34:47.539 [2024-07-14 09:44:31.776026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.539 [2024-07-14 09:44:31.776053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.539 qpair failed and we were unable to recover it. 00:34:47.539 [2024-07-14 09:44:31.776303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.539 [2024-07-14 09:44:31.776330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.539 qpair failed and we were unable to recover it. 00:34:47.539 [2024-07-14 09:44:31.776493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.539 [2024-07-14 09:44:31.776520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.539 qpair failed and we were unable to recover it. 00:34:47.539 [2024-07-14 09:44:31.776709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.539 [2024-07-14 09:44:31.776736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.539 qpair failed and we were unable to recover it. 00:34:47.539 [2024-07-14 09:44:31.776904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.539 [2024-07-14 09:44:31.776932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.539 qpair failed and we were unable to recover it. 00:34:47.539 [2024-07-14 09:44:31.777121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.539 [2024-07-14 09:44:31.777148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.539 qpair failed and we were unable to recover it. 00:34:47.539 [2024-07-14 09:44:31.777331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.539 [2024-07-14 09:44:31.777363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.539 qpair failed and we were unable to recover it. 00:34:47.539 [2024-07-14 09:44:31.777546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.539 [2024-07-14 09:44:31.777576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.539 qpair failed and we were unable to recover it. 00:34:47.539 [2024-07-14 09:44:31.777818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.539 [2024-07-14 09:44:31.777845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.539 qpair failed and we were unable to recover it. 00:34:47.539 [2024-07-14 09:44:31.778043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.539 [2024-07-14 09:44:31.778070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.539 qpair failed and we were unable to recover it. 
00:34:47.539 [2024-07-14 09:44:31.778288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.539 [2024-07-14 09:44:31.778331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.539 qpair failed and we were unable to recover it. 00:34:47.539 [2024-07-14 09:44:31.778544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.539 [2024-07-14 09:44:31.778574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.539 qpair failed and we were unable to recover it. 00:34:47.539 [2024-07-14 09:44:31.778796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.539 [2024-07-14 09:44:31.778823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.539 qpair failed and we were unable to recover it. 00:34:47.539 [2024-07-14 09:44:31.779022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.539 [2024-07-14 09:44:31.779049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.539 qpair failed and we were unable to recover it. 00:34:47.539 [2024-07-14 09:44:31.779237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.539 [2024-07-14 09:44:31.779266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.539 qpair failed and we were unable to recover it. 00:34:47.539 [2024-07-14 09:44:31.779501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.539 [2024-07-14 09:44:31.779528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.539 qpair failed and we were unable to recover it. 00:34:47.539 [2024-07-14 09:44:31.779748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.539 [2024-07-14 09:44:31.779779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.539 qpair failed and we were unable to recover it. 00:34:47.539 [2024-07-14 09:44:31.780019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.539 [2024-07-14 09:44:31.780047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.539 qpair failed and we were unable to recover it. 00:34:47.539 [2024-07-14 09:44:31.780238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.539 [2024-07-14 09:44:31.780266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.539 qpair failed and we were unable to recover it. 00:34:47.539 [2024-07-14 09:44:31.780489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.539 [2024-07-14 09:44:31.780519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.539 qpair failed and we were unable to recover it. 
00:34:47.539 [2024-07-14 09:44:31.780761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.539 [2024-07-14 09:44:31.780789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.539 qpair failed and we were unable to recover it. 00:34:47.539 [2024-07-14 09:44:31.780980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.539 [2024-07-14 09:44:31.781008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.539 qpair failed and we were unable to recover it. 00:34:47.539 [2024-07-14 09:44:31.781195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.539 [2024-07-14 09:44:31.781222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.539 qpair failed and we were unable to recover it. 00:34:47.539 [2024-07-14 09:44:31.781387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.539 [2024-07-14 09:44:31.781414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.539 qpair failed and we were unable to recover it. 00:34:47.539 [2024-07-14 09:44:31.781571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.539 [2024-07-14 09:44:31.781598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.539 qpair failed and we were unable to recover it. 00:34:47.539 [2024-07-14 09:44:31.781803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.539 [2024-07-14 09:44:31.781833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.539 qpair failed and we were unable to recover it. 00:34:47.539 [2024-07-14 09:44:31.782091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.539 [2024-07-14 09:44:31.782122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.539 qpair failed and we were unable to recover it. 00:34:47.539 [2024-07-14 09:44:31.782357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.539 [2024-07-14 09:44:31.782384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.539 qpair failed and we were unable to recover it. 00:34:47.539 [2024-07-14 09:44:31.782608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.539 [2024-07-14 09:44:31.782638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.539 qpair failed and we were unable to recover it. 00:34:47.539 [2024-07-14 09:44:31.782851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.539 [2024-07-14 09:44:31.782888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.539 qpair failed and we were unable to recover it. 
00:34:47.539 [2024-07-14 09:44:31.783078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.539 [2024-07-14 09:44:31.783106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.539 qpair failed and we were unable to recover it. 00:34:47.539 [2024-07-14 09:44:31.783316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.539 [2024-07-14 09:44:31.783346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.539 qpair failed and we were unable to recover it. 00:34:47.539 [2024-07-14 09:44:31.783567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.539 [2024-07-14 09:44:31.783597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.539 qpair failed and we were unable to recover it. 00:34:47.539 [2024-07-14 09:44:31.783831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.539 [2024-07-14 09:44:31.783858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.539 qpair failed and we were unable to recover it. 00:34:47.539 [2024-07-14 09:44:31.784093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.540 [2024-07-14 09:44:31.784123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.540 qpair failed and we were unable to recover it. 00:34:47.540 [2024-07-14 09:44:31.784334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.540 [2024-07-14 09:44:31.784365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.540 qpair failed and we were unable to recover it. 00:34:47.540 [2024-07-14 09:44:31.784574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.540 [2024-07-14 09:44:31.784602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.540 qpair failed and we were unable to recover it. 00:34:47.540 [2024-07-14 09:44:31.784764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.540 [2024-07-14 09:44:31.784791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.540 qpair failed and we were unable to recover it. 00:34:47.540 [2024-07-14 09:44:31.784986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.540 [2024-07-14 09:44:31.785013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.540 qpair failed and we were unable to recover it. 00:34:47.540 [2024-07-14 09:44:31.785213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.540 [2024-07-14 09:44:31.785240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.540 qpair failed and we were unable to recover it. 
00:34:47.540 [2024-07-14 09:44:31.785482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.540 [2024-07-14 09:44:31.785511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.540 qpair failed and we were unable to recover it. 00:34:47.540 [2024-07-14 09:44:31.785685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.540 [2024-07-14 09:44:31.785714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.540 qpair failed and we were unable to recover it. 00:34:47.540 [2024-07-14 09:44:31.785926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.540 [2024-07-14 09:44:31.785955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.540 qpair failed and we were unable to recover it. 00:34:47.540 [2024-07-14 09:44:31.786174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.540 [2024-07-14 09:44:31.786205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.540 qpair failed and we were unable to recover it. 00:34:47.540 [2024-07-14 09:44:31.786421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.540 [2024-07-14 09:44:31.786448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.540 qpair failed and we were unable to recover it. 00:34:47.540 [2024-07-14 09:44:31.786665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.540 [2024-07-14 09:44:31.786692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.540 qpair failed and we were unable to recover it. 00:34:47.540 [2024-07-14 09:44:31.786944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.540 [2024-07-14 09:44:31.786974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.540 qpair failed and we were unable to recover it. 00:34:47.540 [2024-07-14 09:44:31.787176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.540 [2024-07-14 09:44:31.787210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.540 qpair failed and we were unable to recover it. 00:34:47.540 [2024-07-14 09:44:31.787426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.540 [2024-07-14 09:44:31.787453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.540 qpair failed and we were unable to recover it. 00:34:47.540 [2024-07-14 09:44:31.787642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.540 [2024-07-14 09:44:31.787672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.540 qpair failed and we were unable to recover it. 
00:34:47.540 [2024-07-14 09:44:31.787884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.540 [2024-07-14 09:44:31.787914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.540 qpair failed and we were unable to recover it. 00:34:47.540 [2024-07-14 09:44:31.788120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.540 [2024-07-14 09:44:31.788147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.540 qpair failed and we were unable to recover it. 00:34:47.540 [2024-07-14 09:44:31.788317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.540 [2024-07-14 09:44:31.788344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.540 qpair failed and we were unable to recover it. 00:34:47.540 [2024-07-14 09:44:31.788536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.540 [2024-07-14 09:44:31.788563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.540 qpair failed and we were unable to recover it. 00:34:47.540 [2024-07-14 09:44:31.788783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.540 [2024-07-14 09:44:31.788810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.540 qpair failed and we were unable to recover it. 00:34:47.540 [2024-07-14 09:44:31.788976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.540 [2024-07-14 09:44:31.789005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.540 qpair failed and we were unable to recover it. 00:34:47.540 [2024-07-14 09:44:31.789214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.540 [2024-07-14 09:44:31.789244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.540 qpair failed and we were unable to recover it. 00:34:47.540 [2024-07-14 09:44:31.789450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.540 [2024-07-14 09:44:31.789477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.540 qpair failed and we were unable to recover it. 00:34:47.540 [2024-07-14 09:44:31.789694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.540 [2024-07-14 09:44:31.789724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.540 qpair failed and we were unable to recover it. 00:34:47.540 [2024-07-14 09:44:31.789909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.540 [2024-07-14 09:44:31.789940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.540 qpair failed and we were unable to recover it. 
00:34:47.540 [2024-07-14 09:44:31.790145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.540 [2024-07-14 09:44:31.790172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.540 qpair failed and we were unable to recover it. 00:34:47.540 [2024-07-14 09:44:31.790410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.540 [2024-07-14 09:44:31.790440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.540 qpair failed and we were unable to recover it. 00:34:47.540 [2024-07-14 09:44:31.790654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.540 [2024-07-14 09:44:31.790684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.540 qpair failed and we were unable to recover it. 00:34:47.540 [2024-07-14 09:44:31.790890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.540 [2024-07-14 09:44:31.790918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.540 qpair failed and we were unable to recover it. 00:34:47.540 [2024-07-14 09:44:31.791158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.540 [2024-07-14 09:44:31.791185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.540 qpair failed and we were unable to recover it. 00:34:47.540 [2024-07-14 09:44:31.791372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.540 [2024-07-14 09:44:31.791399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.540 qpair failed and we were unable to recover it. 00:34:47.540 [2024-07-14 09:44:31.791604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.540 [2024-07-14 09:44:31.791631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.540 qpair failed and we were unable to recover it. 00:34:47.540 [2024-07-14 09:44:31.791849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.540 [2024-07-14 09:44:31.791885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.540 qpair failed and we were unable to recover it. 00:34:47.540 [2024-07-14 09:44:31.792101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.540 [2024-07-14 09:44:31.792130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.541 qpair failed and we were unable to recover it. 00:34:47.541 [2024-07-14 09:44:31.792361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.541 [2024-07-14 09:44:31.792388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.541 qpair failed and we were unable to recover it. 
00:34:47.541 [2024-07-14 09:44:31.792610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.541 [2024-07-14 09:44:31.792640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.541 qpair failed and we were unable to recover it. 00:34:47.541 [2024-07-14 09:44:31.792880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.541 [2024-07-14 09:44:31.792919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.541 qpair failed and we were unable to recover it. 00:34:47.541 [2024-07-14 09:44:31.793140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.541 [2024-07-14 09:44:31.793167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.541 qpair failed and we were unable to recover it. 00:34:47.541 [2024-07-14 09:44:31.793374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.541 [2024-07-14 09:44:31.793404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.541 qpair failed and we were unable to recover it. 00:34:47.541 [2024-07-14 09:44:31.793615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.541 [2024-07-14 09:44:31.793645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.541 qpair failed and we were unable to recover it. 00:34:47.541 [2024-07-14 09:44:31.793836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.541 [2024-07-14 09:44:31.793862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.541 qpair failed and we were unable to recover it. 00:34:47.541 [2024-07-14 09:44:31.794102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.541 [2024-07-14 09:44:31.794134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.541 qpair failed and we were unable to recover it. 00:34:47.541 [2024-07-14 09:44:31.794377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.541 [2024-07-14 09:44:31.794407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.541 qpair failed and we were unable to recover it. 00:34:47.541 [2024-07-14 09:44:31.794603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.541 [2024-07-14 09:44:31.794630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.541 qpair failed and we were unable to recover it. 00:34:47.541 [2024-07-14 09:44:31.794823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.541 [2024-07-14 09:44:31.794851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.541 qpair failed and we were unable to recover it. 
00:34:47.541 [2024-07-14 09:44:31.795088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.541 [2024-07-14 09:44:31.795117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.541 qpair failed and we were unable to recover it. 00:34:47.541 [2024-07-14 09:44:31.795305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.541 [2024-07-14 09:44:31.795333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.541 qpair failed and we were unable to recover it. 00:34:47.541 [2024-07-14 09:44:31.795576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.541 [2024-07-14 09:44:31.795605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.541 qpair failed and we were unable to recover it. 00:34:47.541 [2024-07-14 09:44:31.795816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.541 [2024-07-14 09:44:31.795846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.541 qpair failed and we were unable to recover it. 00:34:47.541 [2024-07-14 09:44:31.796042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.541 [2024-07-14 09:44:31.796069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.541 qpair failed and we were unable to recover it. 00:34:47.541 [2024-07-14 09:44:31.796258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.541 [2024-07-14 09:44:31.796285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.541 qpair failed and we were unable to recover it. 00:34:47.541 [2024-07-14 09:44:31.796480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.541 [2024-07-14 09:44:31.796506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.541 qpair failed and we were unable to recover it. 00:34:47.541 [2024-07-14 09:44:31.796757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.541 [2024-07-14 09:44:31.796784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.541 qpair failed and we were unable to recover it. 00:34:47.541 [2024-07-14 09:44:31.797025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.541 [2024-07-14 09:44:31.797053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.541 qpair failed and we were unable to recover it. 00:34:47.541 [2024-07-14 09:44:31.797235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.541 [2024-07-14 09:44:31.797262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.541 qpair failed and we were unable to recover it. 
00:34:47.541 [2024-07-14 09:44:31.797489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.541 [2024-07-14 09:44:31.797516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.541 qpair failed and we were unable to recover it. 00:34:47.541 [2024-07-14 09:44:31.797731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.541 [2024-07-14 09:44:31.797761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.541 qpair failed and we were unable to recover it. 00:34:47.541 [2024-07-14 09:44:31.797964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.541 [2024-07-14 09:44:31.797994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.541 qpair failed and we were unable to recover it. 00:34:47.541 [2024-07-14 09:44:31.798187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.541 [2024-07-14 09:44:31.798214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.541 qpair failed and we were unable to recover it. 00:34:47.541 [2024-07-14 09:44:31.798454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.541 [2024-07-14 09:44:31.798483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.541 qpair failed and we were unable to recover it. 00:34:47.541 [2024-07-14 09:44:31.798695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.541 [2024-07-14 09:44:31.798724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.541 qpair failed and we were unable to recover it. 00:34:47.541 [2024-07-14 09:44:31.798963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.541 [2024-07-14 09:44:31.798990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.541 qpair failed and we were unable to recover it. 00:34:47.541 [2024-07-14 09:44:31.799233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.541 [2024-07-14 09:44:31.799263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.541 qpair failed and we were unable to recover it. 00:34:47.541 [2024-07-14 09:44:31.799471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.541 [2024-07-14 09:44:31.799500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.541 qpair failed and we were unable to recover it. 00:34:47.541 [2024-07-14 09:44:31.799735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.541 [2024-07-14 09:44:31.799761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.541 qpair failed and we were unable to recover it. 
00:34:47.541 [2024-07-14 09:44:31.800005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.541 [2024-07-14 09:44:31.800035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.541 qpair failed and we were unable to recover it. 00:34:47.541 [2024-07-14 09:44:31.800255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.541 [2024-07-14 09:44:31.800281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.541 qpair failed and we were unable to recover it. 00:34:47.541 [2024-07-14 09:44:31.800453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.541 [2024-07-14 09:44:31.800479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.541 qpair failed and we were unable to recover it. 00:34:47.541 [2024-07-14 09:44:31.800694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.541 [2024-07-14 09:44:31.800723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.541 qpair failed and we were unable to recover it. 00:34:47.541 [2024-07-14 09:44:31.800925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.541 [2024-07-14 09:44:31.800955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.541 qpair failed and we were unable to recover it. 00:34:47.541 [2024-07-14 09:44:31.801193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.541 [2024-07-14 09:44:31.801219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.541 qpair failed and we were unable to recover it. 00:34:47.541 [2024-07-14 09:44:31.801448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.541 [2024-07-14 09:44:31.801475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.541 qpair failed and we were unable to recover it. 00:34:47.541 [2024-07-14 09:44:31.801716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.541 [2024-07-14 09:44:31.801745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.541 qpair failed and we were unable to recover it. 00:34:47.541 [2024-07-14 09:44:31.801981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.541 [2024-07-14 09:44:31.802008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.541 qpair failed and we were unable to recover it. 00:34:47.541 [2024-07-14 09:44:31.802198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.541 [2024-07-14 09:44:31.802227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.541 qpair failed and we were unable to recover it. 
00:34:47.542 [2024-07-14 09:44:31.802460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.542 [2024-07-14 09:44:31.802488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.542 qpair failed and we were unable to recover it. 00:34:47.542 [2024-07-14 09:44:31.802697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.542 [2024-07-14 09:44:31.802723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.542 qpair failed and we were unable to recover it. 00:34:47.542 [2024-07-14 09:44:31.802970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.542 [2024-07-14 09:44:31.803000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.542 qpair failed and we were unable to recover it. 00:34:47.542 [2024-07-14 09:44:31.803229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.542 [2024-07-14 09:44:31.803258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.542 qpair failed and we were unable to recover it. 00:34:47.542 [2024-07-14 09:44:31.803490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.542 [2024-07-14 09:44:31.803516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.542 qpair failed and we were unable to recover it. 00:34:47.542 [2024-07-14 09:44:31.803731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.542 [2024-07-14 09:44:31.803767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.542 qpair failed and we were unable to recover it. 00:34:47.542 [2024-07-14 09:44:31.803959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.542 [2024-07-14 09:44:31.803988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.542 qpair failed and we were unable to recover it. 00:34:47.542 [2024-07-14 09:44:31.804232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.542 [2024-07-14 09:44:31.804258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.542 qpair failed and we were unable to recover it. 00:34:47.542 [2024-07-14 09:44:31.804503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.542 [2024-07-14 09:44:31.804532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.542 qpair failed and we were unable to recover it. 00:34:47.542 [2024-07-14 09:44:31.804744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.542 [2024-07-14 09:44:31.804770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.542 qpair failed and we were unable to recover it. 
00:34:47.542 [2024-07-14 09:44:31.804962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.542 [2024-07-14 09:44:31.804990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.542 qpair failed and we were unable to recover it. 00:34:47.542 [2024-07-14 09:44:31.805173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.542 [2024-07-14 09:44:31.805199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.542 qpair failed and we were unable to recover it. 00:34:47.542 [2024-07-14 09:44:31.805382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.542 [2024-07-14 09:44:31.805409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.542 qpair failed and we were unable to recover it. 00:34:47.542 [2024-07-14 09:44:31.805626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.542 [2024-07-14 09:44:31.805652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.542 qpair failed and we were unable to recover it. 00:34:47.542 [2024-07-14 09:44:31.805844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.542 [2024-07-14 09:44:31.805892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.542 qpair failed and we were unable to recover it. 00:34:47.542 [2024-07-14 09:44:31.806109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.542 [2024-07-14 09:44:31.806135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.542 qpair failed and we were unable to recover it. 00:34:47.542 [2024-07-14 09:44:31.806349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.542 [2024-07-14 09:44:31.806376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.542 qpair failed and we were unable to recover it. 00:34:47.542 [2024-07-14 09:44:31.806616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.542 [2024-07-14 09:44:31.806645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.542 qpair failed and we were unable to recover it. 00:34:47.542 [2024-07-14 09:44:31.806828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.542 [2024-07-14 09:44:31.806857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.542 qpair failed and we were unable to recover it. 00:34:47.542 [2024-07-14 09:44:31.807081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.542 [2024-07-14 09:44:31.807108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.542 qpair failed and we were unable to recover it. 
00:34:47.542 [2024-07-14 09:44:31.807319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.542 [2024-07-14 09:44:31.807348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.542 qpair failed and we were unable to recover it. 00:34:47.542 [2024-07-14 09:44:31.807564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.542 [2024-07-14 09:44:31.807590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.542 qpair failed and we were unable to recover it. 00:34:47.542 [2024-07-14 09:44:31.807816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.542 [2024-07-14 09:44:31.807842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.542 qpair failed and we were unable to recover it. 00:34:47.542 [2024-07-14 09:44:31.808063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.542 [2024-07-14 09:44:31.808090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.542 qpair failed and we were unable to recover it. 00:34:47.542 [2024-07-14 09:44:31.808303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.542 [2024-07-14 09:44:31.808332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.542 qpair failed and we were unable to recover it. 00:34:47.542 [2024-07-14 09:44:31.808549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.542 [2024-07-14 09:44:31.808575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.542 qpair failed and we were unable to recover it. 00:34:47.542 [2024-07-14 09:44:31.808819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.542 [2024-07-14 09:44:31.808848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.542 qpair failed and we were unable to recover it. 00:34:47.542 [2024-07-14 09:44:31.809099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.542 [2024-07-14 09:44:31.809129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.542 qpair failed and we were unable to recover it. 00:34:47.542 [2024-07-14 09:44:31.809350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.542 [2024-07-14 09:44:31.809376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.542 qpair failed and we were unable to recover it. 00:34:47.542 [2024-07-14 09:44:31.809583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.542 [2024-07-14 09:44:31.809613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.542 qpair failed and we were unable to recover it. 
00:34:47.542 [2024-07-14 09:44:31.809813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.542 [2024-07-14 09:44:31.809843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.542 qpair failed and we were unable to recover it. 00:34:47.542 [2024-07-14 09:44:31.810064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.542 [2024-07-14 09:44:31.810091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.542 qpair failed and we were unable to recover it. 00:34:47.542 [2024-07-14 09:44:31.810334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.542 [2024-07-14 09:44:31.810363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.542 qpair failed and we were unable to recover it. 00:34:47.542 [2024-07-14 09:44:31.810551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.542 [2024-07-14 09:44:31.810581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.542 qpair failed and we were unable to recover it. 00:34:47.542 [2024-07-14 09:44:31.810795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.542 [2024-07-14 09:44:31.810821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.542 qpair failed and we were unable to recover it. 00:34:47.542 [2024-07-14 09:44:31.811022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.542 [2024-07-14 09:44:31.811050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.542 qpair failed and we were unable to recover it. 00:34:47.542 [2024-07-14 09:44:31.811298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.542 [2024-07-14 09:44:31.811327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.542 qpair failed and we were unable to recover it. 00:34:47.542 [2024-07-14 09:44:31.811560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.542 [2024-07-14 09:44:31.811586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.542 qpair failed and we were unable to recover it. 00:34:47.542 [2024-07-14 09:44:31.811782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.542 [2024-07-14 09:44:31.811808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.542 qpair failed and we were unable to recover it. 00:34:47.542 [2024-07-14 09:44:31.812018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.542 [2024-07-14 09:44:31.812048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.542 qpair failed and we were unable to recover it. 
00:34:47.542 [2024-07-14 09:44:31.812288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.542 [2024-07-14 09:44:31.812315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.542 qpair failed and we were unable to recover it. 00:34:47.542 [2024-07-14 09:44:31.812529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.543 [2024-07-14 09:44:31.812558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.543 qpair failed and we were unable to recover it. 00:34:47.543 [2024-07-14 09:44:31.812795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.543 [2024-07-14 09:44:31.812824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.543 qpair failed and we were unable to recover it. 00:34:47.543 [2024-07-14 09:44:31.813045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.543 [2024-07-14 09:44:31.813071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.543 qpair failed and we were unable to recover it. 00:34:47.543 [2024-07-14 09:44:31.813318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.543 [2024-07-14 09:44:31.813347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.543 qpair failed and we were unable to recover it. 00:34:47.543 [2024-07-14 09:44:31.813584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.543 [2024-07-14 09:44:31.813613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.543 qpair failed and we were unable to recover it. 00:34:47.543 [2024-07-14 09:44:31.813829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.543 [2024-07-14 09:44:31.813855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.543 qpair failed and we were unable to recover it. 00:34:47.543 [2024-07-14 09:44:31.814095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.543 [2024-07-14 09:44:31.814125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.543 qpair failed and we were unable to recover it. 00:34:47.543 [2024-07-14 09:44:31.814361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.543 [2024-07-14 09:44:31.814390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.543 qpair failed and we were unable to recover it. 00:34:47.543 [2024-07-14 09:44:31.814599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.543 [2024-07-14 09:44:31.814626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.543 qpair failed and we were unable to recover it. 
00:34:47.543 [2024-07-14 09:44:31.814845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.543 [2024-07-14 09:44:31.814882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.543 qpair failed and we were unable to recover it. 00:34:47.543 [2024-07-14 09:44:31.815102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.543 [2024-07-14 09:44:31.815131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.543 qpair failed and we were unable to recover it. 00:34:47.543 [2024-07-14 09:44:31.815373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.543 [2024-07-14 09:44:31.815399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.543 qpair failed and we were unable to recover it. 00:34:47.543 [2024-07-14 09:44:31.815649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.543 [2024-07-14 09:44:31.815679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.543 qpair failed and we were unable to recover it. 00:34:47.543 [2024-07-14 09:44:31.815886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.543 [2024-07-14 09:44:31.815916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.543 qpair failed and we were unable to recover it. 00:34:47.543 [2024-07-14 09:44:31.816131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.543 [2024-07-14 09:44:31.816157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.543 qpair failed and we were unable to recover it. 00:34:47.543 [2024-07-14 09:44:31.816404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.543 [2024-07-14 09:44:31.816433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.543 qpair failed and we were unable to recover it. 00:34:47.543 [2024-07-14 09:44:31.816646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.543 [2024-07-14 09:44:31.816675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.543 qpair failed and we were unable to recover it. 00:34:47.543 [2024-07-14 09:44:31.816924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.543 [2024-07-14 09:44:31.816952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.543 qpair failed and we were unable to recover it. 00:34:47.543 [2024-07-14 09:44:31.817179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.543 [2024-07-14 09:44:31.817205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.543 qpair failed and we were unable to recover it. 
00:34:47.543 [2024-07-14 09:44:31.817425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.543 [2024-07-14 09:44:31.817455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.543 qpair failed and we were unable to recover it. 00:34:47.543 [2024-07-14 09:44:31.817672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.543 [2024-07-14 09:44:31.817698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.543 qpair failed and we were unable to recover it. 00:34:47.543 [2024-07-14 09:44:31.817884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.543 [2024-07-14 09:44:31.817914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.543 qpair failed and we were unable to recover it. 00:34:47.543 [2024-07-14 09:44:31.818121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.543 [2024-07-14 09:44:31.818150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.543 qpair failed and we were unable to recover it. 00:34:47.543 [2024-07-14 09:44:31.818355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.543 [2024-07-14 09:44:31.818382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.543 qpair failed and we were unable to recover it. 00:34:47.543 [2024-07-14 09:44:31.818576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.543 [2024-07-14 09:44:31.818602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.543 qpair failed and we were unable to recover it. 00:34:47.543 [2024-07-14 09:44:31.818823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.543 [2024-07-14 09:44:31.818852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.543 qpair failed and we were unable to recover it. 00:34:47.543 [2024-07-14 09:44:31.819073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.543 [2024-07-14 09:44:31.819099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.543 qpair failed and we were unable to recover it. 00:34:47.543 [2024-07-14 09:44:31.819319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.543 [2024-07-14 09:44:31.819348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.543 qpair failed and we were unable to recover it. 00:34:47.543 [2024-07-14 09:44:31.819559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.543 [2024-07-14 09:44:31.819589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.543 qpair failed and we were unable to recover it. 
00:34:47.543 [2024-07-14 09:44:31.819822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.543 [2024-07-14 09:44:31.819849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.543 qpair failed and we were unable to recover it. 00:34:47.543 [2024-07-14 09:44:31.820102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.543 [2024-07-14 09:44:31.820132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.543 qpair failed and we were unable to recover it. 00:34:47.543 [2024-07-14 09:44:31.820358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.543 [2024-07-14 09:44:31.820388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.543 qpair failed and we were unable to recover it. 00:34:47.543 [2024-07-14 09:44:31.820595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.543 [2024-07-14 09:44:31.820625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.543 qpair failed and we were unable to recover it. 00:34:47.543 [2024-07-14 09:44:31.820864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.543 [2024-07-14 09:44:31.820902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.543 qpair failed and we were unable to recover it. 00:34:47.543 [2024-07-14 09:44:31.821107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.543 [2024-07-14 09:44:31.821137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.543 qpair failed and we were unable to recover it. 00:34:47.543 [2024-07-14 09:44:31.821345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.543 [2024-07-14 09:44:31.821371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.543 qpair failed and we were unable to recover it. 00:34:47.543 [2024-07-14 09:44:31.821580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.543 [2024-07-14 09:44:31.821610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.543 qpair failed and we were unable to recover it. 00:34:47.543 [2024-07-14 09:44:31.821845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.543 [2024-07-14 09:44:31.821893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.543 qpair failed and we were unable to recover it. 00:34:47.543 [2024-07-14 09:44:31.822115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.543 [2024-07-14 09:44:31.822141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.543 qpair failed and we were unable to recover it. 
00:34:47.543 [2024-07-14 09:44:31.822384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.543 [2024-07-14 09:44:31.822413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.543 qpair failed and we were unable to recover it. 00:34:47.543 [2024-07-14 09:44:31.822628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.543 [2024-07-14 09:44:31.822657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.543 qpair failed and we were unable to recover it. 00:34:47.543 [2024-07-14 09:44:31.822847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.543 [2024-07-14 09:44:31.822883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.543 qpair failed and we were unable to recover it. 00:34:47.543 [2024-07-14 09:44:31.823100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.544 [2024-07-14 09:44:31.823143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.544 qpair failed and we were unable to recover it. 00:34:47.544 [2024-07-14 09:44:31.823328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.544 [2024-07-14 09:44:31.823358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.544 qpair failed and we were unable to recover it. 00:34:47.544 [2024-07-14 09:44:31.823568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.544 [2024-07-14 09:44:31.823595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.544 qpair failed and we were unable to recover it. 00:34:47.544 [2024-07-14 09:44:31.823828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.544 [2024-07-14 09:44:31.823857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.544 qpair failed and we were unable to recover it. 00:34:47.544 [2024-07-14 09:44:31.824090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.544 [2024-07-14 09:44:31.824119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.544 qpair failed and we were unable to recover it. 00:34:47.544 [2024-07-14 09:44:31.824330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.544 [2024-07-14 09:44:31.824356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.544 qpair failed and we were unable to recover it. 00:34:47.544 [2024-07-14 09:44:31.824596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.544 [2024-07-14 09:44:31.824625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.544 qpair failed and we were unable to recover it. 
00:34:47.544 [2024-07-14 09:44:31.824859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.544 [2024-07-14 09:44:31.824897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.544 qpair failed and we were unable to recover it. 00:34:47.544 [2024-07-14 09:44:31.825136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.544 [2024-07-14 09:44:31.825162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.544 qpair failed and we were unable to recover it. 00:34:47.544 [2024-07-14 09:44:31.825377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.544 [2024-07-14 09:44:31.825407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.544 qpair failed and we were unable to recover it. 00:34:47.544 [2024-07-14 09:44:31.825596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.544 [2024-07-14 09:44:31.825625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.544 qpair failed and we were unable to recover it. 00:34:47.544 [2024-07-14 09:44:31.825883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.544 [2024-07-14 09:44:31.825911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.544 qpair failed and we were unable to recover it. 00:34:47.544 [2024-07-14 09:44:31.826140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.544 [2024-07-14 09:44:31.826169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.544 qpair failed and we were unable to recover it. 00:34:47.544 [2024-07-14 09:44:31.826385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.544 [2024-07-14 09:44:31.826414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.544 qpair failed and we were unable to recover it. 00:34:47.544 [2024-07-14 09:44:31.826597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.544 [2024-07-14 09:44:31.826623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.544 qpair failed and we were unable to recover it. 00:34:47.544 [2024-07-14 09:44:31.826860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.544 [2024-07-14 09:44:31.826899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.544 qpair failed and we were unable to recover it. 00:34:47.544 [2024-07-14 09:44:31.827111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.544 [2024-07-14 09:44:31.827141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.544 qpair failed and we were unable to recover it. 
00:34:47.544 [2024-07-14 09:44:31.827379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.544 [2024-07-14 09:44:31.827406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.544 qpair failed and we were unable to recover it. 00:34:47.544 [2024-07-14 09:44:31.827663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.544 [2024-07-14 09:44:31.827690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.544 qpair failed and we were unable to recover it. 00:34:47.544 [2024-07-14 09:44:31.827914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.544 [2024-07-14 09:44:31.827941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.544 qpair failed and we were unable to recover it. 00:34:47.544 [2024-07-14 09:44:31.828166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.544 [2024-07-14 09:44:31.828192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.544 qpair failed and we were unable to recover it. 00:34:47.544 [2024-07-14 09:44:31.828422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.544 [2024-07-14 09:44:31.828448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.544 qpair failed and we were unable to recover it. 00:34:47.544 [2024-07-14 09:44:31.828667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.544 [2024-07-14 09:44:31.828710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.544 qpair failed and we were unable to recover it. 00:34:47.544 [2024-07-14 09:44:31.828889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.544 [2024-07-14 09:44:31.828916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.544 qpair failed and we were unable to recover it. 00:34:47.544 [2024-07-14 09:44:31.829084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.544 [2024-07-14 09:44:31.829111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.544 qpair failed and we were unable to recover it. 00:34:47.544 [2024-07-14 09:44:31.829356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.544 [2024-07-14 09:44:31.829386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.544 qpair failed and we were unable to recover it. 00:34:47.544 [2024-07-14 09:44:31.829620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.544 [2024-07-14 09:44:31.829647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.544 qpair failed and we were unable to recover it. 
00:34:47.544 [2024-07-14 09:44:31.829886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.544 [2024-07-14 09:44:31.829916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.544 qpair failed and we were unable to recover it. 00:34:47.544 [2024-07-14 09:44:31.830093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.544 [2024-07-14 09:44:31.830122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.544 qpair failed and we were unable to recover it. 00:34:47.544 [2024-07-14 09:44:31.830311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.544 [2024-07-14 09:44:31.830337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.544 qpair failed and we were unable to recover it. 00:34:47.544 [2024-07-14 09:44:31.830549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.544 [2024-07-14 09:44:31.830578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.544 qpair failed and we were unable to recover it. 00:34:47.544 [2024-07-14 09:44:31.830810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.544 [2024-07-14 09:44:31.830843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.544 qpair failed and we were unable to recover it. 00:34:47.544 [2024-07-14 09:44:31.831095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.544 [2024-07-14 09:44:31.831122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.544 qpair failed and we were unable to recover it. 00:34:47.544 [2024-07-14 09:44:31.831330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.544 [2024-07-14 09:44:31.831358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.544 qpair failed and we were unable to recover it. 00:34:47.544 [2024-07-14 09:44:31.831549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.544 [2024-07-14 09:44:31.831576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.544 qpair failed and we were unable to recover it. 00:34:47.544 [2024-07-14 09:44:31.831766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.544 [2024-07-14 09:44:31.831793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.544 qpair failed and we were unable to recover it. 00:34:47.544 [2024-07-14 09:44:31.831982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.544 [2024-07-14 09:44:31.832009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.544 qpair failed and we were unable to recover it. 
00:34:47.544 [2024-07-14 09:44:31.832225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.544 [2024-07-14 09:44:31.832255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.545 qpair failed and we were unable to recover it. 00:34:47.545 [2024-07-14 09:44:31.832475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.545 [2024-07-14 09:44:31.832502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.545 qpair failed and we were unable to recover it. 00:34:47.545 [2024-07-14 09:44:31.832716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.545 [2024-07-14 09:44:31.832745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.545 qpair failed and we were unable to recover it. 00:34:47.545 [2024-07-14 09:44:31.832963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.545 [2024-07-14 09:44:31.832993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.545 qpair failed and we were unable to recover it. 00:34:47.545 [2024-07-14 09:44:31.833184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.545 [2024-07-14 09:44:31.833211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.545 qpair failed and we were unable to recover it. 00:34:47.545 [2024-07-14 09:44:31.833449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.545 [2024-07-14 09:44:31.833478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.545 qpair failed and we were unable to recover it. 00:34:47.545 [2024-07-14 09:44:31.833688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.545 [2024-07-14 09:44:31.833718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.545 qpair failed and we were unable to recover it. 00:34:47.545 [2024-07-14 09:44:31.833940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.545 [2024-07-14 09:44:31.833967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.545 qpair failed and we were unable to recover it. 00:34:47.545 [2024-07-14 09:44:31.834218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.545 [2024-07-14 09:44:31.834247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.545 qpair failed and we were unable to recover it. 00:34:47.545 [2024-07-14 09:44:31.834497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.545 [2024-07-14 09:44:31.834523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.545 qpair failed and we were unable to recover it. 
00:34:47.545 [2024-07-14 09:44:31.834715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.545 [2024-07-14 09:44:31.834742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.545 qpair failed and we were unable to recover it. 00:34:47.545 [2024-07-14 09:44:31.834909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.545 [2024-07-14 09:44:31.834936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.545 qpair failed and we were unable to recover it. 00:34:47.545 [2024-07-14 09:44:31.835153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.545 [2024-07-14 09:44:31.835182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.545 qpair failed and we were unable to recover it. 00:34:47.545 [2024-07-14 09:44:31.835386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.545 [2024-07-14 09:44:31.835413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.545 qpair failed and we were unable to recover it. 00:34:47.545 [2024-07-14 09:44:31.835661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.545 [2024-07-14 09:44:31.835690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.545 qpair failed and we were unable to recover it. 00:34:47.545 [2024-07-14 09:44:31.835864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.545 [2024-07-14 09:44:31.835901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.545 qpair failed and we were unable to recover it. 00:34:47.545 [2024-07-14 09:44:31.836091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.545 [2024-07-14 09:44:31.836117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.545 qpair failed and we were unable to recover it. 00:34:47.545 [2024-07-14 09:44:31.836335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.545 [2024-07-14 09:44:31.836365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.545 qpair failed and we were unable to recover it. 00:34:47.545 [2024-07-14 09:44:31.836551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.545 [2024-07-14 09:44:31.836581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.545 qpair failed and we were unable to recover it. 00:34:47.545 [2024-07-14 09:44:31.836763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.545 [2024-07-14 09:44:31.836789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.545 qpair failed and we were unable to recover it. 
00:34:47.545 [2024-07-14 09:44:31.836985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.545 [2024-07-14 09:44:31.837012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.545 qpair failed and we were unable to recover it. 00:34:47.545 [2024-07-14 09:44:31.837227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.545 [2024-07-14 09:44:31.837261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.545 qpair failed and we were unable to recover it. 00:34:47.545 [2024-07-14 09:44:31.837491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.545 [2024-07-14 09:44:31.837518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.545 qpair failed and we were unable to recover it. 00:34:47.545 [2024-07-14 09:44:31.837740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.545 [2024-07-14 09:44:31.837769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.545 qpair failed and we were unable to recover it. 00:34:47.545 [2024-07-14 09:44:31.838014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.545 [2024-07-14 09:44:31.838044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.545 qpair failed and we were unable to recover it. 00:34:47.545 [2024-07-14 09:44:31.838248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.545 [2024-07-14 09:44:31.838275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.545 qpair failed and we were unable to recover it. 00:34:47.545 [2024-07-14 09:44:31.838515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.545 [2024-07-14 09:44:31.838545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.545 qpair failed and we were unable to recover it. 00:34:47.545 [2024-07-14 09:44:31.838733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.545 [2024-07-14 09:44:31.838763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.545 qpair failed and we were unable to recover it. 00:34:47.545 [2024-07-14 09:44:31.838980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.545 [2024-07-14 09:44:31.839007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.545 qpair failed and we were unable to recover it. 00:34:47.545 [2024-07-14 09:44:31.839225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.545 [2024-07-14 09:44:31.839269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.545 qpair failed and we were unable to recover it. 
00:34:47.545 [2024-07-14 09:44:31.839509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.545 [2024-07-14 09:44:31.839536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.545 qpair failed and we were unable to recover it. 00:34:47.545 [2024-07-14 09:44:31.839722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.545 [2024-07-14 09:44:31.839749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.545 qpair failed and we were unable to recover it. 00:34:47.545 [2024-07-14 09:44:31.839973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.545 [2024-07-14 09:44:31.840003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.545 qpair failed and we were unable to recover it. 00:34:47.545 [2024-07-14 09:44:31.840192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.545 [2024-07-14 09:44:31.840221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.545 qpair failed and we were unable to recover it. 00:34:47.545 [2024-07-14 09:44:31.840462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.545 [2024-07-14 09:44:31.840489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.545 qpair failed and we were unable to recover it. 00:34:47.545 [2024-07-14 09:44:31.840681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.545 [2024-07-14 09:44:31.840712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.545 qpair failed and we were unable to recover it. 00:34:47.545 [2024-07-14 09:44:31.840919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.545 [2024-07-14 09:44:31.840949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.545 qpair failed and we were unable to recover it. 00:34:47.545 [2024-07-14 09:44:31.841165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.545 [2024-07-14 09:44:31.841199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.545 qpair failed and we were unable to recover it. 00:34:47.545 [2024-07-14 09:44:31.841388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.545 [2024-07-14 09:44:31.841420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.545 qpair failed and we were unable to recover it. 00:34:47.545 [2024-07-14 09:44:31.841636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.546 [2024-07-14 09:44:31.841666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.546 qpair failed and we were unable to recover it. 
00:34:47.546 [2024-07-14 09:44:31.841924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.546 [2024-07-14 09:44:31.841951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.546 qpair failed and we were unable to recover it. 00:34:47.546 [2024-07-14 09:44:31.842163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.546 [2024-07-14 09:44:31.842192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.546 qpair failed and we were unable to recover it. 00:34:47.546 [2024-07-14 09:44:31.842401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.546 [2024-07-14 09:44:31.842430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.546 qpair failed and we were unable to recover it. 00:34:47.546 [2024-07-14 09:44:31.842661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.546 [2024-07-14 09:44:31.842687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.546 qpair failed and we were unable to recover it. 00:34:47.546 [2024-07-14 09:44:31.842915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.546 [2024-07-14 09:44:31.842945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.546 qpair failed and we were unable to recover it. 00:34:47.546 [2024-07-14 09:44:31.843157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.546 [2024-07-14 09:44:31.843188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.546 qpair failed and we were unable to recover it. 00:34:47.546 [2024-07-14 09:44:31.843382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.546 [2024-07-14 09:44:31.843408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.546 qpair failed and we were unable to recover it. 00:34:47.546 [2024-07-14 09:44:31.843591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.546 [2024-07-14 09:44:31.843617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.546 qpair failed and we were unable to recover it. 00:34:47.546 [2024-07-14 09:44:31.843851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.546 [2024-07-14 09:44:31.843887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.546 qpair failed and we were unable to recover it. 00:34:47.546 [2024-07-14 09:44:31.844150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.546 [2024-07-14 09:44:31.844177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.546 qpair failed and we were unable to recover it. 
00:34:47.546 [2024-07-14 09:44:31.844424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.546 [2024-07-14 09:44:31.844454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.546 qpair failed and we were unable to recover it. 00:34:47.546 [2024-07-14 09:44:31.844629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.546 [2024-07-14 09:44:31.844659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.546 qpair failed and we were unable to recover it. 00:34:47.546 [2024-07-14 09:44:31.844874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.546 [2024-07-14 09:44:31.844901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.546 qpair failed and we were unable to recover it. 00:34:47.546 [2024-07-14 09:44:31.845128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.546 [2024-07-14 09:44:31.845158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.546 qpair failed and we were unable to recover it. 00:34:47.546 [2024-07-14 09:44:31.845367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.546 [2024-07-14 09:44:31.845393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.546 qpair failed and we were unable to recover it. 00:34:47.546 [2024-07-14 09:44:31.845562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.546 [2024-07-14 09:44:31.845588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.546 qpair failed and we were unable to recover it. 00:34:47.546 [2024-07-14 09:44:31.845799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.546 [2024-07-14 09:44:31.845828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.546 qpair failed and we were unable to recover it. 00:34:47.546 [2024-07-14 09:44:31.846034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.546 [2024-07-14 09:44:31.846064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.546 qpair failed and we were unable to recover it. 00:34:47.546 [2024-07-14 09:44:31.846297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.546 [2024-07-14 09:44:31.846323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.546 qpair failed and we were unable to recover it. 00:34:47.546 [2024-07-14 09:44:31.846508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.546 [2024-07-14 09:44:31.846538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.546 qpair failed and we were unable to recover it. 
00:34:47.546 [2024-07-14 09:44:31.846758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.546 [2024-07-14 09:44:31.846789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.546 qpair failed and we were unable to recover it. 00:34:47.546 [2024-07-14 09:44:31.846988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.546 [2024-07-14 09:44:31.847015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.546 qpair failed and we were unable to recover it. 00:34:47.546 [2024-07-14 09:44:31.847178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.546 [2024-07-14 09:44:31.847209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.546 qpair failed and we were unable to recover it. 00:34:47.546 [2024-07-14 09:44:31.847396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.546 [2024-07-14 09:44:31.847422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.546 qpair failed and we were unable to recover it. 00:34:47.546 [2024-07-14 09:44:31.847651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.546 [2024-07-14 09:44:31.847677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.546 qpair failed and we were unable to recover it. 00:34:47.546 [2024-07-14 09:44:31.847881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.546 [2024-07-14 09:44:31.847921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.546 qpair failed and we were unable to recover it. 00:34:47.546 [2024-07-14 09:44:31.848131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.546 [2024-07-14 09:44:31.848160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.546 qpair failed and we were unable to recover it. 00:34:47.546 [2024-07-14 09:44:31.848349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.546 [2024-07-14 09:44:31.848375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.546 qpair failed and we were unable to recover it. 00:34:47.546 [2024-07-14 09:44:31.848566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.546 [2024-07-14 09:44:31.848592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.546 qpair failed and we were unable to recover it. 00:34:47.546 [2024-07-14 09:44:31.848790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.546 [2024-07-14 09:44:31.848819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.546 qpair failed and we were unable to recover it. 
00:34:47.546 [2024-07-14 09:44:31.849030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.546 [2024-07-14 09:44:31.849057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.546 qpair failed and we were unable to recover it. 00:34:47.546 [2024-07-14 09:44:31.849292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.546 [2024-07-14 09:44:31.849321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.546 qpair failed and we were unable to recover it. 00:34:47.546 [2024-07-14 09:44:31.849569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.546 [2024-07-14 09:44:31.849606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.546 qpair failed and we were unable to recover it. 00:34:47.546 [2024-07-14 09:44:31.849839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.546 [2024-07-14 09:44:31.849881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.546 qpair failed and we were unable to recover it. 00:34:47.546 [2024-07-14 09:44:31.850110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.546 [2024-07-14 09:44:31.850141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.546 qpair failed and we were unable to recover it. 00:34:47.546 [2024-07-14 09:44:31.850352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.546 [2024-07-14 09:44:31.850381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.546 qpair failed and we were unable to recover it. 00:34:47.546 [2024-07-14 09:44:31.850567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.546 [2024-07-14 09:44:31.850593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.546 qpair failed and we were unable to recover it. 00:34:47.546 [2024-07-14 09:44:31.850810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.546 [2024-07-14 09:44:31.850841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.546 qpair failed and we were unable to recover it. 00:34:47.546 [2024-07-14 09:44:31.851068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.546 [2024-07-14 09:44:31.851095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.546 qpair failed and we were unable to recover it. 00:34:47.546 [2024-07-14 09:44:31.851288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.546 [2024-07-14 09:44:31.851315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.546 qpair failed and we were unable to recover it. 
00:34:47.546 [2024-07-14 09:44:31.851551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.547 [2024-07-14 09:44:31.851580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.547 qpair failed and we were unable to recover it. 00:34:47.547 [2024-07-14 09:44:31.851796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.547 [2024-07-14 09:44:31.851822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.547 qpair failed and we were unable to recover it. 00:34:47.547 [2024-07-14 09:44:31.852029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.547 [2024-07-14 09:44:31.852056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.547 qpair failed and we were unable to recover it. 00:34:47.547 [2024-07-14 09:44:31.852265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.547 [2024-07-14 09:44:31.852295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.547 qpair failed and we were unable to recover it. 00:34:47.547 [2024-07-14 09:44:31.852516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.547 [2024-07-14 09:44:31.852545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.547 qpair failed and we were unable to recover it. 00:34:47.547 [2024-07-14 09:44:31.852753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.547 [2024-07-14 09:44:31.852780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.547 qpair failed and we were unable to recover it. 00:34:47.547 [2024-07-14 09:44:31.852998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.547 [2024-07-14 09:44:31.853028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.547 qpair failed and we were unable to recover it. 00:34:47.547 [2024-07-14 09:44:31.853274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.547 [2024-07-14 09:44:31.853304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.547 qpair failed and we were unable to recover it. 00:34:47.547 [2024-07-14 09:44:31.853512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.547 [2024-07-14 09:44:31.853539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.547 qpair failed and we were unable to recover it. 00:34:47.547 [2024-07-14 09:44:31.853717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.547 [2024-07-14 09:44:31.853753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.547 qpair failed and we were unable to recover it. 
00:34:47.547 [2024-07-14 09:44:31.853942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.547 [2024-07-14 09:44:31.853973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.547 qpair failed and we were unable to recover it. 00:34:47.547 [2024-07-14 09:44:31.854166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.547 [2024-07-14 09:44:31.854193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.547 qpair failed and we were unable to recover it. 00:34:47.547 [2024-07-14 09:44:31.854382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.547 [2024-07-14 09:44:31.854412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.547 qpair failed and we were unable to recover it. 00:34:47.547 [2024-07-14 09:44:31.854622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.547 [2024-07-14 09:44:31.854651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.547 qpair failed and we were unable to recover it. 00:34:47.547 [2024-07-14 09:44:31.854859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.547 [2024-07-14 09:44:31.854893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.547 qpair failed and we were unable to recover it. 00:34:47.547 [2024-07-14 09:44:31.855116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.547 [2024-07-14 09:44:31.855147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.547 qpair failed and we were unable to recover it. 00:34:47.547 [2024-07-14 09:44:31.855363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.547 [2024-07-14 09:44:31.855393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.547 qpair failed and we were unable to recover it. 00:34:47.547 [2024-07-14 09:44:31.855598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.547 [2024-07-14 09:44:31.855626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.547 qpair failed and we were unable to recover it. 00:34:47.547 [2024-07-14 09:44:31.855842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.547 [2024-07-14 09:44:31.855877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.547 qpair failed and we were unable to recover it. 00:34:47.547 [2024-07-14 09:44:31.856120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.547 [2024-07-14 09:44:31.856154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.547 qpair failed and we were unable to recover it. 
00:34:47.547 [2024-07-14 09:44:31.856343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.547 [2024-07-14 09:44:31.856369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.547 qpair failed and we were unable to recover it. 00:34:47.547 [2024-07-14 09:44:31.856567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.547 [2024-07-14 09:44:31.856594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.547 qpair failed and we were unable to recover it. 00:34:47.547 [2024-07-14 09:44:31.856775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.547 [2024-07-14 09:44:31.856802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.547 qpair failed and we were unable to recover it. 00:34:47.547 [2024-07-14 09:44:31.857015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.547 [2024-07-14 09:44:31.857057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.547 qpair failed and we were unable to recover it. 00:34:47.547 [2024-07-14 09:44:31.857297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.547 [2024-07-14 09:44:31.857326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.547 qpair failed and we were unable to recover it. 00:34:47.547 [2024-07-14 09:44:31.857576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.547 [2024-07-14 09:44:31.857620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.547 qpair failed and we were unable to recover it. 00:34:47.547 [2024-07-14 09:44:31.857879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.547 [2024-07-14 09:44:31.857908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.547 qpair failed and we were unable to recover it. 00:34:47.547 [2024-07-14 09:44:31.858113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.547 [2024-07-14 09:44:31.858151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.547 qpair failed and we were unable to recover it. 00:34:47.547 [2024-07-14 09:44:31.858375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.547 [2024-07-14 09:44:31.858420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.547 qpair failed and we were unable to recover it. 00:34:47.547 [2024-07-14 09:44:31.858675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.547 [2024-07-14 09:44:31.858724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.547 qpair failed and we were unable to recover it. 
00:34:47.547 [2024-07-14 09:44:31.858921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.547 [2024-07-14 09:44:31.858949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.547 qpair failed and we were unable to recover it. 00:34:47.547 [2024-07-14 09:44:31.859118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.547 [2024-07-14 09:44:31.859152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.547 qpair failed and we were unable to recover it. 00:34:47.547 [2024-07-14 09:44:31.859366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.547 [2024-07-14 09:44:31.859410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.547 qpair failed and we were unable to recover it. 00:34:47.547 [2024-07-14 09:44:31.859633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.547 [2024-07-14 09:44:31.859678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.547 qpair failed and we were unable to recover it. 00:34:47.547 [2024-07-14 09:44:31.859877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.547 [2024-07-14 09:44:31.859904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.547 qpair failed and we were unable to recover it. 00:34:47.547 [2024-07-14 09:44:31.860094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.547 [2024-07-14 09:44:31.860122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.547 qpair failed and we were unable to recover it. 00:34:47.547 [2024-07-14 09:44:31.860368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.547 [2024-07-14 09:44:31.860416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.548 qpair failed and we were unable to recover it. 00:34:47.548 [2024-07-14 09:44:31.860638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.548 [2024-07-14 09:44:31.860666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.548 qpair failed and we were unable to recover it. 00:34:47.548 [2024-07-14 09:44:31.860847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.548 [2024-07-14 09:44:31.860882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.548 qpair failed and we were unable to recover it. 00:34:47.548 [2024-07-14 09:44:31.861056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.548 [2024-07-14 09:44:31.861082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.548 qpair failed and we were unable to recover it. 
00:34:47.548 [2024-07-14 09:44:31.861332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.548 [2024-07-14 09:44:31.861379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.548 qpair failed and we were unable to recover it. 00:34:47.548 [2024-07-14 09:44:31.861688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.548 [2024-07-14 09:44:31.861739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.548 qpair failed and we were unable to recover it. 00:34:47.548 [2024-07-14 09:44:31.861959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.548 [2024-07-14 09:44:31.861986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.548 qpair failed and we were unable to recover it. 00:34:47.548 [2024-07-14 09:44:31.862170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.548 [2024-07-14 09:44:31.862216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.548 qpair failed and we were unable to recover it. 00:34:47.548 [2024-07-14 09:44:31.862457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.548 [2024-07-14 09:44:31.862500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.548 qpair failed and we were unable to recover it. 00:34:47.548 [2024-07-14 09:44:31.862741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.548 [2024-07-14 09:44:31.862786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.548 qpair failed and we were unable to recover it. 00:34:47.548 [2024-07-14 09:44:31.862983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.548 [2024-07-14 09:44:31.863011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.548 qpair failed and we were unable to recover it. 00:34:47.548 [2024-07-14 09:44:31.863204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.548 [2024-07-14 09:44:31.863253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.548 qpair failed and we were unable to recover it. 00:34:47.548 [2024-07-14 09:44:31.863462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.548 [2024-07-14 09:44:31.863506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.548 qpair failed and we were unable to recover it. 00:34:47.548 [2024-07-14 09:44:31.863709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.548 [2024-07-14 09:44:31.863737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.548 qpair failed and we were unable to recover it. 
00:34:47.548 [2024-07-14 09:44:31.863958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.548 [2024-07-14 09:44:31.864003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.548 qpair failed and we were unable to recover it. 00:34:47.548 [2024-07-14 09:44:31.864195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.548 [2024-07-14 09:44:31.864225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.548 qpair failed and we were unable to recover it. 00:34:47.548 [2024-07-14 09:44:31.864456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.548 [2024-07-14 09:44:31.864501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.548 qpair failed and we were unable to recover it. 00:34:47.548 [2024-07-14 09:44:31.864712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.548 [2024-07-14 09:44:31.864739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.548 qpair failed and we were unable to recover it. 00:34:47.548 [2024-07-14 09:44:31.864982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.548 [2024-07-14 09:44:31.865028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.548 qpair failed and we were unable to recover it. 00:34:47.548 [2024-07-14 09:44:31.865224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.548 [2024-07-14 09:44:31.865270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.548 qpair failed and we were unable to recover it. 00:34:47.548 [2024-07-14 09:44:31.865516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.548 [2024-07-14 09:44:31.865560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.548 qpair failed and we were unable to recover it. 00:34:47.548 [2024-07-14 09:44:31.865754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.548 [2024-07-14 09:44:31.865781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.548 qpair failed and we were unable to recover it. 00:34:47.548 [2024-07-14 09:44:31.866000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.548 [2024-07-14 09:44:31.866046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.548 qpair failed and we were unable to recover it. 00:34:47.548 [2024-07-14 09:44:31.866249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.548 [2024-07-14 09:44:31.866293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.548 qpair failed and we were unable to recover it. 
00:34:47.548 [2024-07-14 09:44:31.866536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.548 [2024-07-14 09:44:31.866580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.548 qpair failed and we were unable to recover it. 00:34:47.548 [2024-07-14 09:44:31.866777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.548 [2024-07-14 09:44:31.866804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.548 qpair failed and we were unable to recover it. 00:34:47.548 [2024-07-14 09:44:31.866999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.548 [2024-07-14 09:44:31.867044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.548 qpair failed and we were unable to recover it. 00:34:47.548 [2024-07-14 09:44:31.867255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.548 [2024-07-14 09:44:31.867300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.548 qpair failed and we were unable to recover it. 00:34:47.548 [2024-07-14 09:44:31.867551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.548 [2024-07-14 09:44:31.867594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.548 qpair failed and we were unable to recover it. 00:34:47.548 [2024-07-14 09:44:31.867788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.548 [2024-07-14 09:44:31.867815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.548 qpair failed and we were unable to recover it. 00:34:47.548 [2024-07-14 09:44:31.868019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.548 [2024-07-14 09:44:31.868065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.548 qpair failed and we were unable to recover it. 00:34:47.548 [2024-07-14 09:44:31.868307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.548 [2024-07-14 09:44:31.868352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.548 qpair failed and we were unable to recover it. 00:34:47.548 [2024-07-14 09:44:31.868577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.548 [2024-07-14 09:44:31.868625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.548 qpair failed and we were unable to recover it. 00:34:47.548 [2024-07-14 09:44:31.868841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.548 [2024-07-14 09:44:31.868875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.548 qpair failed and we were unable to recover it. 
00:34:47.548 [2024-07-14 09:44:31.869052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.548 [2024-07-14 09:44:31.869080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.548 qpair failed and we were unable to recover it. 00:34:47.548 [2024-07-14 09:44:31.869332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.548 [2024-07-14 09:44:31.869376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.548 qpair failed and we were unable to recover it. 00:34:47.548 [2024-07-14 09:44:31.869604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.548 [2024-07-14 09:44:31.869651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.548 qpair failed and we were unable to recover it. 00:34:47.548 [2024-07-14 09:44:31.869873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.548 [2024-07-14 09:44:31.869901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.548 qpair failed and we were unable to recover it. 00:34:47.548 [2024-07-14 09:44:31.870077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.548 [2024-07-14 09:44:31.870107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.548 qpair failed and we were unable to recover it. 00:34:47.548 [2024-07-14 09:44:31.870329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.548 [2024-07-14 09:44:31.870375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.548 qpair failed and we were unable to recover it. 00:34:47.548 [2024-07-14 09:44:31.870658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.548 [2024-07-14 09:44:31.870719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.548 qpair failed and we were unable to recover it. 00:34:47.548 [2024-07-14 09:44:31.870924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.549 [2024-07-14 09:44:31.870954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.549 qpair failed and we were unable to recover it. 00:34:47.549 [2024-07-14 09:44:31.871159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.549 [2024-07-14 09:44:31.871189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.549 qpair failed and we were unable to recover it. 00:34:47.549 [2024-07-14 09:44:31.871416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.549 [2024-07-14 09:44:31.871462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.549 qpair failed and we were unable to recover it. 
00:34:47.549 [2024-07-14 09:44:31.871662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.549 [2024-07-14 09:44:31.871718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.549 qpair failed and we were unable to recover it. 00:34:47.549 [2024-07-14 09:44:31.871952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.549 [2024-07-14 09:44:31.871999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.549 qpair failed and we were unable to recover it. 00:34:47.549 [2024-07-14 09:44:31.872197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.549 [2024-07-14 09:44:31.872245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.549 qpair failed and we were unable to recover it. 00:34:47.549 [2024-07-14 09:44:31.872448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.549 [2024-07-14 09:44:31.872499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.549 qpair failed and we were unable to recover it. 00:34:47.549 [2024-07-14 09:44:31.872672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.549 [2024-07-14 09:44:31.872711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.549 qpair failed and we were unable to recover it. 00:34:47.549 [2024-07-14 09:44:31.872931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.549 [2024-07-14 09:44:31.872978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.549 qpair failed and we were unable to recover it. 00:34:47.549 [2024-07-14 09:44:31.873178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.549 [2024-07-14 09:44:31.873222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.549 qpair failed and we were unable to recover it. 00:34:47.549 [2024-07-14 09:44:31.873443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.549 [2024-07-14 09:44:31.873488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.549 qpair failed and we were unable to recover it. 00:34:47.549 [2024-07-14 09:44:31.873648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.549 [2024-07-14 09:44:31.873676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.549 qpair failed and we were unable to recover it. 00:34:47.549 [2024-07-14 09:44:31.873904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.549 [2024-07-14 09:44:31.873934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.549 qpair failed and we were unable to recover it. 
00:34:47.549 [2024-07-14 09:44:31.874142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.549 [2024-07-14 09:44:31.874189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.549 qpair failed and we were unable to recover it. 00:34:47.549 [2024-07-14 09:44:31.874442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.549 [2024-07-14 09:44:31.874487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.549 qpair failed and we were unable to recover it. 00:34:47.549 [2024-07-14 09:44:31.874720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.549 [2024-07-14 09:44:31.874773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.549 qpair failed and we were unable to recover it. 00:34:47.549 [2024-07-14 09:44:31.874995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.549 [2024-07-14 09:44:31.875041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.549 qpair failed and we were unable to recover it. 00:34:47.549 [2024-07-14 09:44:31.875280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.549 [2024-07-14 09:44:31.875326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.549 qpair failed and we were unable to recover it. 00:34:47.549 [2024-07-14 09:44:31.875567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.549 [2024-07-14 09:44:31.875611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.549 qpair failed and we were unable to recover it. 00:34:47.549 [2024-07-14 09:44:31.875832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.549 [2024-07-14 09:44:31.875860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.549 qpair failed and we were unable to recover it. 00:34:47.549 [2024-07-14 09:44:31.876070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.549 [2024-07-14 09:44:31.876099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.549 qpair failed and we were unable to recover it. 00:34:47.549 [2024-07-14 09:44:31.876359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.549 [2024-07-14 09:44:31.876411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.549 qpair failed and we were unable to recover it. 00:34:47.549 [2024-07-14 09:44:31.876638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.549 [2024-07-14 09:44:31.876693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.549 qpair failed and we were unable to recover it. 
00:34:47.550 [2024-07-14 09:44:31.876878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.550 [2024-07-14 09:44:31.876918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.550 qpair failed and we were unable to recover it. 00:34:47.550 [2024-07-14 09:44:31.877116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.550 [2024-07-14 09:44:31.877149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.550 qpair failed and we were unable to recover it. 00:34:47.550 [2024-07-14 09:44:31.877373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.550 [2024-07-14 09:44:31.877419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.550 qpair failed and we were unable to recover it. 00:34:47.550 [2024-07-14 09:44:31.877661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.550 [2024-07-14 09:44:31.877708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.550 qpair failed and we were unable to recover it. 00:34:47.550 [2024-07-14 09:44:31.877919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.550 [2024-07-14 09:44:31.877947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.550 qpair failed and we were unable to recover it. 00:34:47.550 [2024-07-14 09:44:31.878148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.550 [2024-07-14 09:44:31.878175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.550 qpair failed and we were unable to recover it. 00:34:47.550 [2024-07-14 09:44:31.878392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.550 [2024-07-14 09:44:31.878437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.550 qpair failed and we were unable to recover it. 00:34:47.550 [2024-07-14 09:44:31.878631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.550 [2024-07-14 09:44:31.878679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.550 qpair failed and we were unable to recover it. 00:34:47.550 [2024-07-14 09:44:31.878962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.550 [2024-07-14 09:44:31.879007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.550 qpair failed and we were unable to recover it. 00:34:47.550 [2024-07-14 09:44:31.879257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.550 [2024-07-14 09:44:31.879288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.550 qpair failed and we were unable to recover it. 
00:34:47.550 [2024-07-14 09:44:31.879479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.550 [2024-07-14 09:44:31.879509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.550 qpair failed and we were unable to recover it. 00:34:47.550 [2024-07-14 09:44:31.879732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.550 [2024-07-14 09:44:31.879777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.550 qpair failed and we were unable to recover it. 00:34:47.550 [2024-07-14 09:44:31.879973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.550 [2024-07-14 09:44:31.880001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.550 qpair failed and we were unable to recover it. 00:34:47.550 [2024-07-14 09:44:31.880193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.550 [2024-07-14 09:44:31.880222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.550 qpair failed and we were unable to recover it. 00:34:47.550 [2024-07-14 09:44:31.880405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.550 [2024-07-14 09:44:31.880434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.550 qpair failed and we were unable to recover it. 00:34:47.550 [2024-07-14 09:44:31.880650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.550 [2024-07-14 09:44:31.880679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.550 qpair failed and we were unable to recover it. 00:34:47.550 [2024-07-14 09:44:31.880873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.550 [2024-07-14 09:44:31.880903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.550 qpair failed and we were unable to recover it. 00:34:47.550 [2024-07-14 09:44:31.881093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.550 [2024-07-14 09:44:31.881130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.550 qpair failed and we were unable to recover it. 00:34:47.550 [2024-07-14 09:44:31.881347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.550 [2024-07-14 09:44:31.881374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.550 qpair failed and we were unable to recover it. 00:34:47.550 [2024-07-14 09:44:31.881567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.550 [2024-07-14 09:44:31.881593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.550 qpair failed and we were unable to recover it. 
00:34:47.550 [2024-07-14 09:44:31.881857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.550 [2024-07-14 09:44:31.881889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.550 qpair failed and we were unable to recover it. 00:34:47.550 [2024-07-14 09:44:31.882137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.550 [2024-07-14 09:44:31.882164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.550 qpair failed and we were unable to recover it. 00:34:47.550 [2024-07-14 09:44:31.882331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.550 [2024-07-14 09:44:31.882358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.550 qpair failed and we were unable to recover it. 00:34:47.550 [2024-07-14 09:44:31.882601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.550 [2024-07-14 09:44:31.882631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.550 qpair failed and we were unable to recover it. 00:34:47.550 [2024-07-14 09:44:31.882815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.550 [2024-07-14 09:44:31.882842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.550 qpair failed and we were unable to recover it. 00:34:47.550 [2024-07-14 09:44:31.883658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.550 [2024-07-14 09:44:31.883688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.550 qpair failed and we were unable to recover it. 00:34:47.550 [2024-07-14 09:44:31.883920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.550 [2024-07-14 09:44:31.883949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.550 qpair failed and we were unable to recover it. 00:34:47.550 [2024-07-14 09:44:31.884117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.550 [2024-07-14 09:44:31.884143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.550 qpair failed and we were unable to recover it. 00:34:47.550 [2024-07-14 09:44:31.884330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.550 [2024-07-14 09:44:31.884356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.550 qpair failed and we were unable to recover it. 00:34:47.550 [2024-07-14 09:44:31.884548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.550 [2024-07-14 09:44:31.884574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.550 qpair failed and we were unable to recover it. 
00:34:47.550 [2024-07-14 09:44:31.884751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.550 [2024-07-14 09:44:31.884782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.550 qpair failed and we were unable to recover it. 00:34:47.550 [2024-07-14 09:44:31.884975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.550 [2024-07-14 09:44:31.885002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.550 qpair failed and we were unable to recover it. 00:34:47.550 [2024-07-14 09:44:31.885160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.550 [2024-07-14 09:44:31.885186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.550 qpair failed and we were unable to recover it. 00:34:47.551 [2024-07-14 09:44:31.885344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.551 [2024-07-14 09:44:31.885371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.551 qpair failed and we were unable to recover it. 00:34:47.551 [2024-07-14 09:44:31.885634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.551 [2024-07-14 09:44:31.885662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.551 qpair failed and we were unable to recover it. 00:34:47.551 [2024-07-14 09:44:31.885902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.551 [2024-07-14 09:44:31.885948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.551 qpair failed and we were unable to recover it. 00:34:47.551 [2024-07-14 09:44:31.886115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.551 [2024-07-14 09:44:31.886151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.551 qpair failed and we were unable to recover it. 00:34:47.551 [2024-07-14 09:44:31.886325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.551 [2024-07-14 09:44:31.886368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.551 qpair failed and we were unable to recover it. 00:34:47.551 [2024-07-14 09:44:31.886543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.551 [2024-07-14 09:44:31.886570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.551 qpair failed and we were unable to recover it. 00:34:47.551 [2024-07-14 09:44:31.886727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.551 [2024-07-14 09:44:31.886755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.551 qpair failed and we were unable to recover it. 
00:34:47.551 [2024-07-14 09:44:31.886930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.551 [2024-07-14 09:44:31.886957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.551 qpair failed and we were unable to recover it. 00:34:47.551 [2024-07-14 09:44:31.887123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.551 [2024-07-14 09:44:31.887149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.551 qpair failed and we were unable to recover it. 00:34:47.551 [2024-07-14 09:44:31.887315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.551 [2024-07-14 09:44:31.887342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.551 qpair failed and we were unable to recover it. 00:34:47.551 [2024-07-14 09:44:31.887551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.551 [2024-07-14 09:44:31.887578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.551 qpair failed and we were unable to recover it. 00:34:47.551 [2024-07-14 09:44:31.887776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.551 [2024-07-14 09:44:31.887803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.551 qpair failed and we were unable to recover it. 00:34:47.551 [2024-07-14 09:44:31.887998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.551 [2024-07-14 09:44:31.888025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.551 qpair failed and we were unable to recover it. 00:34:47.551 [2024-07-14 09:44:31.888217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.551 [2024-07-14 09:44:31.888245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.551 qpair failed and we were unable to recover it. 00:34:47.551 [2024-07-14 09:44:31.888416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.551 [2024-07-14 09:44:31.888443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.551 qpair failed and we were unable to recover it. 00:34:47.551 [2024-07-14 09:44:31.888660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.551 [2024-07-14 09:44:31.888687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.551 qpair failed and we were unable to recover it. 00:34:47.551 [2024-07-14 09:44:31.888896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.551 [2024-07-14 09:44:31.888923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.551 qpair failed and we were unable to recover it. 
00:34:47.551 [2024-07-14 09:44:31.889083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.551 [2024-07-14 09:44:31.889110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.551 qpair failed and we were unable to recover it. 00:34:47.551 [2024-07-14 09:44:31.889266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.551 [2024-07-14 09:44:31.889293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.551 qpair failed and we were unable to recover it. 00:34:47.551 [2024-07-14 09:44:31.889504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.551 [2024-07-14 09:44:31.889531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.551 qpair failed and we were unable to recover it. 00:34:47.551 [2024-07-14 09:44:31.889747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.551 [2024-07-14 09:44:31.889774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.551 qpair failed and we were unable to recover it. 00:34:47.551 [2024-07-14 09:44:31.889965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.551 [2024-07-14 09:44:31.889992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.551 qpair failed and we were unable to recover it. 00:34:47.551 [2024-07-14 09:44:31.890167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.551 [2024-07-14 09:44:31.890194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.551 qpair failed and we were unable to recover it. 00:34:47.551 [2024-07-14 09:44:31.890376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.551 [2024-07-14 09:44:31.890402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.551 qpair failed and we were unable to recover it. 00:34:47.551 [2024-07-14 09:44:31.890588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.551 [2024-07-14 09:44:31.890618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.551 qpair failed and we were unable to recover it. 00:34:47.551 [2024-07-14 09:44:31.890810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.551 [2024-07-14 09:44:31.890836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.551 qpair failed and we were unable to recover it. 00:34:47.551 [2024-07-14 09:44:31.891010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.551 [2024-07-14 09:44:31.891037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.551 qpair failed and we were unable to recover it. 
00:34:47.551 [2024-07-14 09:44:31.891200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.551 [2024-07-14 09:44:31.891226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.551 qpair failed and we were unable to recover it. 00:34:47.551 [2024-07-14 09:44:31.891447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.551 [2024-07-14 09:44:31.891474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.551 qpair failed and we were unable to recover it. 00:34:47.551 [2024-07-14 09:44:31.891646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.551 [2024-07-14 09:44:31.891672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.552 qpair failed and we were unable to recover it. 00:34:47.552 [2024-07-14 09:44:31.891851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.552 [2024-07-14 09:44:31.891884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.552 qpair failed and we were unable to recover it. 00:34:47.552 [2024-07-14 09:44:31.892064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.552 [2024-07-14 09:44:31.892090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.552 qpair failed and we were unable to recover it. 00:34:47.552 [2024-07-14 09:44:31.892257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.552 [2024-07-14 09:44:31.892284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.552 qpair failed and we were unable to recover it. 00:34:47.552 [2024-07-14 09:44:31.892495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.552 [2024-07-14 09:44:31.892521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.552 qpair failed and we were unable to recover it. 00:34:47.552 [2024-07-14 09:44:31.892734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.552 [2024-07-14 09:44:31.892760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.552 qpair failed and we were unable to recover it. 00:34:47.552 [2024-07-14 09:44:31.892951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.552 [2024-07-14 09:44:31.892978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.552 qpair failed and we were unable to recover it. 00:34:47.552 [2024-07-14 09:44:31.893135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.552 [2024-07-14 09:44:31.893162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.552 qpair failed and we were unable to recover it. 
00:34:47.552 [2024-07-14 09:44:31.893318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.552 [2024-07-14 09:44:31.893345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.552 qpair failed and we were unable to recover it. 00:34:47.552 [2024-07-14 09:44:31.893550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.552 [2024-07-14 09:44:31.893578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.552 qpair failed and we were unable to recover it. 00:34:47.552 [2024-07-14 09:44:31.893777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.552 [2024-07-14 09:44:31.893804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.552 qpair failed and we were unable to recover it. 00:34:47.552 [2024-07-14 09:44:31.893995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.552 [2024-07-14 09:44:31.894022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.552 qpair failed and we were unable to recover it. 00:34:47.552 [2024-07-14 09:44:31.894192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.552 [2024-07-14 09:44:31.894218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.552 qpair failed and we were unable to recover it. 00:34:47.552 [2024-07-14 09:44:31.894413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.552 [2024-07-14 09:44:31.894443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.552 qpair failed and we were unable to recover it. 00:34:47.552 [2024-07-14 09:44:31.894631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.552 [2024-07-14 09:44:31.894656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.552 qpair failed and we were unable to recover it. 00:34:47.552 [2024-07-14 09:44:31.894815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.552 [2024-07-14 09:44:31.894840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.552 qpair failed and we were unable to recover it. 00:34:47.552 [2024-07-14 09:44:31.895069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.552 [2024-07-14 09:44:31.895096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.552 qpair failed and we were unable to recover it. 00:34:47.552 [2024-07-14 09:44:31.895322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.552 [2024-07-14 09:44:31.895348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.552 qpair failed and we were unable to recover it. 
00:34:47.552 [2024-07-14 09:44:31.895501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.552 [2024-07-14 09:44:31.895526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.552 qpair failed and we were unable to recover it. 00:34:47.552 [2024-07-14 09:44:31.895691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.552 [2024-07-14 09:44:31.895717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.552 qpair failed and we were unable to recover it. 00:34:47.552 [2024-07-14 09:44:31.895876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.552 [2024-07-14 09:44:31.895903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.552 qpair failed and we were unable to recover it. 00:34:47.552 [2024-07-14 09:44:31.896107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.552 [2024-07-14 09:44:31.896138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.552 qpair failed and we were unable to recover it. 00:34:47.552 [2024-07-14 09:44:31.896331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.552 [2024-07-14 09:44:31.896358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.552 qpair failed and we were unable to recover it. 00:34:47.552 [2024-07-14 09:44:31.896555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.552 [2024-07-14 09:44:31.896581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.552 qpair failed and we were unable to recover it. 00:34:47.552 [2024-07-14 09:44:31.896751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.552 [2024-07-14 09:44:31.896777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.552 qpair failed and we were unable to recover it. 00:34:47.552 [2024-07-14 09:44:31.896968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.552 [2024-07-14 09:44:31.896994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.552 qpair failed and we were unable to recover it. 00:34:47.552 [2024-07-14 09:44:31.897159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.552 [2024-07-14 09:44:31.897185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.552 qpair failed and we were unable to recover it. 00:34:47.552 [2024-07-14 09:44:31.897355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.552 [2024-07-14 09:44:31.897380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.552 qpair failed and we were unable to recover it. 
00:34:47.552 [2024-07-14 09:44:31.897537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.552 [2024-07-14 09:44:31.897563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.552 qpair failed and we were unable to recover it. 00:34:47.552 [2024-07-14 09:44:31.897757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.552 [2024-07-14 09:44:31.897783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.552 qpair failed and we were unable to recover it. 00:34:47.552 [2024-07-14 09:44:31.897941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.552 [2024-07-14 09:44:31.897967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.552 qpair failed and we were unable to recover it. 00:34:47.552 [2024-07-14 09:44:31.898189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.552 [2024-07-14 09:44:31.898215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.552 qpair failed and we were unable to recover it. 00:34:47.552 [2024-07-14 09:44:31.898376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.552 [2024-07-14 09:44:31.898402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.552 qpair failed and we were unable to recover it. 00:34:47.552 [2024-07-14 09:44:31.898568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.553 [2024-07-14 09:44:31.898593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.553 qpair failed and we were unable to recover it. 00:34:47.553 [2024-07-14 09:44:31.898749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.553 [2024-07-14 09:44:31.898775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.553 qpair failed and we were unable to recover it. 00:34:47.553 [2024-07-14 09:44:31.898991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.553 [2024-07-14 09:44:31.899017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.553 qpair failed and we were unable to recover it. 00:34:47.553 [2024-07-14 09:44:31.899177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.553 [2024-07-14 09:44:31.899207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.553 qpair failed and we were unable to recover it. 00:34:47.553 [2024-07-14 09:44:31.899372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.553 [2024-07-14 09:44:31.899398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.553 qpair failed and we were unable to recover it. 
00:34:47.553 [2024-07-14 09:44:31.899591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.553 [2024-07-14 09:44:31.899617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.553 qpair failed and we were unable to recover it. 00:34:47.553 [2024-07-14 09:44:31.899780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.553 [2024-07-14 09:44:31.899807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.553 qpair failed and we were unable to recover it. 00:34:47.553 [2024-07-14 09:44:31.899973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.553 [2024-07-14 09:44:31.899999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.553 qpair failed and we were unable to recover it. 00:34:47.553 [2024-07-14 09:44:31.900191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.553 [2024-07-14 09:44:31.900216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.553 qpair failed and we were unable to recover it. 00:34:47.553 [2024-07-14 09:44:31.900373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.553 [2024-07-14 09:44:31.900399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.553 qpair failed and we were unable to recover it. 00:34:47.553 [2024-07-14 09:44:31.900559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.553 [2024-07-14 09:44:31.900585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.553 qpair failed and we were unable to recover it. 00:34:47.553 [2024-07-14 09:44:31.900767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.553 [2024-07-14 09:44:31.900793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.553 qpair failed and we were unable to recover it. 00:34:47.553 [2024-07-14 09:44:31.900954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.553 [2024-07-14 09:44:31.900981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.553 qpair failed and we were unable to recover it. 00:34:47.553 [2024-07-14 09:44:31.901146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.553 [2024-07-14 09:44:31.901178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.553 qpair failed and we were unable to recover it. 00:34:47.553 [2024-07-14 09:44:31.901363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.553 [2024-07-14 09:44:31.901388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.553 qpair failed and we were unable to recover it. 
00:34:47.553 [2024-07-14 09:44:31.901577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.553 [2024-07-14 09:44:31.901602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.553 qpair failed and we were unable to recover it. 00:34:47.553 [2024-07-14 09:44:31.901790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.553 [2024-07-14 09:44:31.901816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.553 qpair failed and we were unable to recover it. 00:34:47.553 [2024-07-14 09:44:31.902005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.553 [2024-07-14 09:44:31.902031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.553 qpair failed and we were unable to recover it. 00:34:47.553 [2024-07-14 09:44:31.902226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.553 [2024-07-14 09:44:31.902252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.553 qpair failed and we were unable to recover it. 00:34:47.553 [2024-07-14 09:44:31.902422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.553 [2024-07-14 09:44:31.902447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.553 qpair failed and we were unable to recover it. 00:34:47.553 [2024-07-14 09:44:31.902636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.553 [2024-07-14 09:44:31.902661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.553 qpair failed and we were unable to recover it. 00:34:47.553 [2024-07-14 09:44:31.902817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.553 [2024-07-14 09:44:31.902842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.553 qpair failed and we were unable to recover it. 00:34:47.553 [2024-07-14 09:44:31.903056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.553 [2024-07-14 09:44:31.903083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.553 qpair failed and we were unable to recover it. 00:34:47.553 [2024-07-14 09:44:31.903248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.553 [2024-07-14 09:44:31.903274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.553 qpair failed and we were unable to recover it. 00:34:47.553 [2024-07-14 09:44:31.903491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.553 [2024-07-14 09:44:31.903517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.553 qpair failed and we were unable to recover it. 
00:34:47.553 [2024-07-14 09:44:31.903684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.553 [2024-07-14 09:44:31.903710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.553 qpair failed and we were unable to recover it. 00:34:47.553 [2024-07-14 09:44:31.903876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.553 [2024-07-14 09:44:31.903902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.553 qpair failed and we were unable to recover it. 00:34:47.553 [2024-07-14 09:44:31.904090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.553 [2024-07-14 09:44:31.904115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.553 qpair failed and we were unable to recover it. 00:34:47.553 [2024-07-14 09:44:31.904307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.553 [2024-07-14 09:44:31.904333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.553 qpair failed and we were unable to recover it. 00:34:47.553 [2024-07-14 09:44:31.904518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.553 [2024-07-14 09:44:31.904543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.553 qpair failed and we were unable to recover it. 00:34:47.553 [2024-07-14 09:44:31.904762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.554 [2024-07-14 09:44:31.904791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.554 qpair failed and we were unable to recover it. 00:34:47.554 [2024-07-14 09:44:31.904986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.554 [2024-07-14 09:44:31.905012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.554 qpair failed and we were unable to recover it. 00:34:47.554 [2024-07-14 09:44:31.905196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.554 [2024-07-14 09:44:31.905222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.554 qpair failed and we were unable to recover it. 00:34:47.554 [2024-07-14 09:44:31.905435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.554 [2024-07-14 09:44:31.905459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.554 qpair failed and we were unable to recover it. 00:34:47.554 [2024-07-14 09:44:31.905624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.554 [2024-07-14 09:44:31.905650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.554 qpair failed and we were unable to recover it. 
00:34:47.554 [2024-07-14 09:44:31.905811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.554 [2024-07-14 09:44:31.905836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.554 qpair failed and we were unable to recover it. 00:34:47.554 [2024-07-14 09:44:31.906032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.554 [2024-07-14 09:44:31.906058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.554 qpair failed and we were unable to recover it. 00:34:47.554 [2024-07-14 09:44:31.906246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.554 [2024-07-14 09:44:31.906272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.554 qpair failed and we were unable to recover it. 00:34:47.554 [2024-07-14 09:44:31.906467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.554 [2024-07-14 09:44:31.906492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.554 qpair failed and we were unable to recover it. 00:34:47.554 [2024-07-14 09:44:31.906682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.554 [2024-07-14 09:44:31.906708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.554 qpair failed and we were unable to recover it. 00:34:47.554 [2024-07-14 09:44:31.906924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.554 [2024-07-14 09:44:31.906951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.554 qpair failed and we were unable to recover it. 00:34:47.554 [2024-07-14 09:44:31.907111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.554 [2024-07-14 09:44:31.907136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.554 qpair failed and we were unable to recover it. 00:34:47.554 [2024-07-14 09:44:31.907301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.554 [2024-07-14 09:44:31.907327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.554 qpair failed and we were unable to recover it. 00:34:47.554 [2024-07-14 09:44:31.907547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.554 [2024-07-14 09:44:31.907573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.554 qpair failed and we were unable to recover it. 00:34:47.554 [2024-07-14 09:44:31.907768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.554 [2024-07-14 09:44:31.907794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.554 qpair failed and we were unable to recover it. 
00:34:47.554 [2024-07-14 09:44:31.907958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.554 [2024-07-14 09:44:31.907983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.554 qpair failed and we were unable to recover it. 00:34:47.554 [2024-07-14 09:44:31.908145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.554 [2024-07-14 09:44:31.908171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.554 qpair failed and we were unable to recover it. 00:34:47.554 [2024-07-14 09:44:31.908340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.554 [2024-07-14 09:44:31.908365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.554 qpair failed and we were unable to recover it. 00:34:47.554 [2024-07-14 09:44:31.908581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.554 [2024-07-14 09:44:31.908606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.554 qpair failed and we were unable to recover it. 00:34:47.554 [2024-07-14 09:44:31.908804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.554 [2024-07-14 09:44:31.908829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.554 qpair failed and we were unable to recover it. 00:34:47.554 [2024-07-14 09:44:31.908993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.554 [2024-07-14 09:44:31.909019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.554 qpair failed and we were unable to recover it. 00:34:47.554 [2024-07-14 09:44:31.909207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.554 [2024-07-14 09:44:31.909232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.554 qpair failed and we were unable to recover it. 00:34:47.554 [2024-07-14 09:44:31.909452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.554 [2024-07-14 09:44:31.909477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.554 qpair failed and we were unable to recover it. 00:34:47.554 [2024-07-14 09:44:31.909660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.554 [2024-07-14 09:44:31.909689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.554 qpair failed and we were unable to recover it. 00:34:47.554 [2024-07-14 09:44:31.909893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.554 [2024-07-14 09:44:31.909924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.554 qpair failed and we were unable to recover it. 
00:34:47.554 [2024-07-14 09:44:31.910111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.554 [2024-07-14 09:44:31.910137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.554 qpair failed and we were unable to recover it. 00:34:47.554 [2024-07-14 09:44:31.910324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.554 [2024-07-14 09:44:31.910349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.554 qpair failed and we were unable to recover it. 00:34:47.554 [2024-07-14 09:44:31.910552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.554 [2024-07-14 09:44:31.910580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.554 qpair failed and we were unable to recover it. 00:34:47.554 [2024-07-14 09:44:31.910767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.554 [2024-07-14 09:44:31.910795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.554 qpair failed and we were unable to recover it. 00:34:47.554 [2024-07-14 09:44:31.910990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.554 [2024-07-14 09:44:31.911016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.554 qpair failed and we were unable to recover it. 00:34:47.554 [2024-07-14 09:44:31.911178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.555 [2024-07-14 09:44:31.911203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.555 qpair failed and we were unable to recover it. 00:34:47.555 [2024-07-14 09:44:31.911383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.555 [2024-07-14 09:44:31.911412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.555 qpair failed and we were unable to recover it. 00:34:47.555 [2024-07-14 09:44:31.911611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.555 [2024-07-14 09:44:31.911639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.555 qpair failed and we were unable to recover it. 00:34:47.555 [2024-07-14 09:44:31.911838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.555 [2024-07-14 09:44:31.911873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.555 qpair failed and we were unable to recover it. 00:34:47.555 [2024-07-14 09:44:31.912084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.555 [2024-07-14 09:44:31.912109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.555 qpair failed and we were unable to recover it. 
00:34:47.555 [2024-07-14 09:44:31.912286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.555 [2024-07-14 09:44:31.912328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.555 qpair failed and we were unable to recover it. 00:34:47.555 [2024-07-14 09:44:31.912510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.555 [2024-07-14 09:44:31.912539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.555 qpair failed and we were unable to recover it. 00:34:47.555 [2024-07-14 09:44:31.912772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.555 [2024-07-14 09:44:31.912814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.555 qpair failed and we were unable to recover it. 00:34:47.555 [2024-07-14 09:44:31.913023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.555 [2024-07-14 09:44:31.913051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.555 qpair failed and we were unable to recover it. 00:34:47.555 [2024-07-14 09:44:31.913256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.555 [2024-07-14 09:44:31.913285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.555 qpair failed and we were unable to recover it. 00:34:47.555 [2024-07-14 09:44:31.913474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.555 [2024-07-14 09:44:31.913516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.555 qpair failed and we were unable to recover it. 00:34:47.555 [2024-07-14 09:44:31.913749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.555 [2024-07-14 09:44:31.913782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.555 qpair failed and we were unable to recover it. 00:34:47.556 [2024-07-14 09:44:31.914004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.556 [2024-07-14 09:44:31.914031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.556 qpair failed and we were unable to recover it. 00:34:47.556 [2024-07-14 09:44:31.914232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.556 [2024-07-14 09:44:31.914258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.556 qpair failed and we were unable to recover it. 00:34:47.556 [2024-07-14 09:44:31.914443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.556 [2024-07-14 09:44:31.914468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.556 qpair failed and we were unable to recover it. 
00:34:47.556 [2024-07-14 09:44:31.914684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.556 [2024-07-14 09:44:31.914713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.556 qpair failed and we were unable to recover it. 00:34:47.556 [2024-07-14 09:44:31.914943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.556 [2024-07-14 09:44:31.914969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.556 qpair failed and we were unable to recover it. 00:34:47.556 [2024-07-14 09:44:31.915162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.556 [2024-07-14 09:44:31.915190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.556 qpair failed and we were unable to recover it. 00:34:47.556 [2024-07-14 09:44:31.915373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.556 [2024-07-14 09:44:31.915398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.556 qpair failed and we were unable to recover it. 00:34:47.556 [2024-07-14 09:44:31.915596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.556 [2024-07-14 09:44:31.915621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.556 qpair failed and we were unable to recover it. 00:34:47.556 [2024-07-14 09:44:31.915811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.556 [2024-07-14 09:44:31.915836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.556 qpair failed and we were unable to recover it. 00:34:47.556 [2024-07-14 09:44:31.916010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.556 [2024-07-14 09:44:31.916037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.556 qpair failed and we were unable to recover it. 00:34:47.556 [2024-07-14 09:44:31.916264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.556 [2024-07-14 09:44:31.916291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.556 qpair failed and we were unable to recover it. 00:34:47.556 [2024-07-14 09:44:31.916483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.556 [2024-07-14 09:44:31.916508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.556 qpair failed and we were unable to recover it. 00:34:47.556 [2024-07-14 09:44:31.916691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.556 [2024-07-14 09:44:31.916719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.556 qpair failed and we were unable to recover it. 
00:34:47.556 [2024-07-14 09:44:31.916922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.556 [2024-07-14 09:44:31.916948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.556 qpair failed and we were unable to recover it. 00:34:47.556 [2024-07-14 09:44:31.917167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.556 [2024-07-14 09:44:31.917192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.556 qpair failed and we were unable to recover it. 00:34:47.556 [2024-07-14 09:44:31.917379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.556 [2024-07-14 09:44:31.917404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.556 qpair failed and we were unable to recover it. 00:34:47.556 [2024-07-14 09:44:31.917622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.556 [2024-07-14 09:44:31.917647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.556 qpair failed and we were unable to recover it. 00:34:47.556 [2024-07-14 09:44:31.917805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.556 [2024-07-14 09:44:31.917830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.556 qpair failed and we were unable to recover it. 00:34:47.556 [2024-07-14 09:44:31.918026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.556 [2024-07-14 09:44:31.918052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.556 qpair failed and we were unable to recover it. 00:34:47.556 [2024-07-14 09:44:31.918228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.556 [2024-07-14 09:44:31.918254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.556 qpair failed and we were unable to recover it. 00:34:47.556 [2024-07-14 09:44:31.918411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.556 [2024-07-14 09:44:31.918436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.556 qpair failed and we were unable to recover it. 00:34:47.556 [2024-07-14 09:44:31.918624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.556 [2024-07-14 09:44:31.918651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.556 qpair failed and we were unable to recover it. 00:34:47.556 [2024-07-14 09:44:31.918841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.556 [2024-07-14 09:44:31.918874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.556 qpair failed and we were unable to recover it. 
00:34:47.556 [2024-07-14 09:44:31.919064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.556 [2024-07-14 09:44:31.919089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.557 qpair failed and we were unable to recover it. 00:34:47.557 [2024-07-14 09:44:31.919300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.557 [2024-07-14 09:44:31.919328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.557 qpair failed and we were unable to recover it. 00:34:47.557 [2024-07-14 09:44:31.919543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.557 [2024-07-14 09:44:31.919571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.557 qpair failed and we were unable to recover it. 00:34:47.557 [2024-07-14 09:44:31.919779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.557 [2024-07-14 09:44:31.919807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.557 qpair failed and we were unable to recover it. 00:34:47.557 [2024-07-14 09:44:31.920019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.557 [2024-07-14 09:44:31.920046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.557 qpair failed and we were unable to recover it. 00:34:47.557 [2024-07-14 09:44:31.920222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.557 [2024-07-14 09:44:31.920247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.557 qpair failed and we were unable to recover it. 00:34:47.557 [2024-07-14 09:44:31.920436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.557 [2024-07-14 09:44:31.920462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.557 qpair failed and we were unable to recover it. 00:34:47.557 [2024-07-14 09:44:31.920666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.557 [2024-07-14 09:44:31.920694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.557 qpair failed and we were unable to recover it. 00:34:47.557 [2024-07-14 09:44:31.920921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.557 [2024-07-14 09:44:31.920947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.557 qpair failed and we were unable to recover it. 00:34:47.557 [2024-07-14 09:44:31.921108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.557 [2024-07-14 09:44:31.921142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.557 qpair failed and we were unable to recover it. 
00:34:47.557 [2024-07-14 09:44:31.921295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.557 [2024-07-14 09:44:31.921320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.557 qpair failed and we were unable to recover it. 00:34:47.557 [2024-07-14 09:44:31.921476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.557 [2024-07-14 09:44:31.921501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.557 qpair failed and we were unable to recover it. 00:34:47.557 [2024-07-14 09:44:31.921687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.557 [2024-07-14 09:44:31.921712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.557 qpair failed and we were unable to recover it. 00:34:47.557 [2024-07-14 09:44:31.921876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.557 [2024-07-14 09:44:31.921902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.557 qpair failed and we were unable to recover it. 00:34:47.557 [2024-07-14 09:44:31.922128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.557 [2024-07-14 09:44:31.922162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.557 qpair failed and we were unable to recover it. 00:34:47.557 [2024-07-14 09:44:31.922326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.557 [2024-07-14 09:44:31.922352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.557 qpair failed and we were unable to recover it. 00:34:47.557 [2024-07-14 09:44:31.922567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.557 [2024-07-14 09:44:31.922595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.557 qpair failed and we were unable to recover it. 00:34:47.557 [2024-07-14 09:44:31.922837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.557 [2024-07-14 09:44:31.922872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.557 qpair failed and we were unable to recover it. 00:34:47.557 [2024-07-14 09:44:31.923091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.557 [2024-07-14 09:44:31.923117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.557 qpair failed and we were unable to recover it. 00:34:47.557 [2024-07-14 09:44:31.923275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.557 [2024-07-14 09:44:31.923300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.557 qpair failed and we were unable to recover it. 
00:34:47.557 [2024-07-14 09:44:31.923487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.557 [2024-07-14 09:44:31.923513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.557 qpair failed and we were unable to recover it. 00:34:47.557 [2024-07-14 09:44:31.923672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.557 [2024-07-14 09:44:31.923697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.557 qpair failed and we were unable to recover it. 00:34:47.557 [2024-07-14 09:44:31.923931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.557 [2024-07-14 09:44:31.923957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.557 qpair failed and we were unable to recover it. 00:34:47.557 [2024-07-14 09:44:31.924178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.557 [2024-07-14 09:44:31.924204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.557 qpair failed and we were unable to recover it. 00:34:47.557 [2024-07-14 09:44:31.924392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.557 [2024-07-14 09:44:31.924417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.557 qpair failed and we were unable to recover it. 00:34:47.557 [2024-07-14 09:44:31.924620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.558 [2024-07-14 09:44:31.924648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.558 qpair failed and we were unable to recover it. 00:34:47.558 [2024-07-14 09:44:31.924825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.558 [2024-07-14 09:44:31.924851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.558 qpair failed and we were unable to recover it. 00:34:47.558 [2024-07-14 09:44:31.925063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.558 [2024-07-14 09:44:31.925090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.558 qpair failed and we were unable to recover it. 00:34:47.558 [2024-07-14 09:44:31.925304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.558 [2024-07-14 09:44:31.925329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.558 qpair failed and we were unable to recover it. 00:34:47.558 [2024-07-14 09:44:31.925482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.558 [2024-07-14 09:44:31.925508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.558 qpair failed and we were unable to recover it. 
00:34:47.558 [2024-07-14 09:44:31.925693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.558 [2024-07-14 09:44:31.925718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.558 qpair failed and we were unable to recover it. 00:34:47.558 [2024-07-14 09:44:31.925903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.558 [2024-07-14 09:44:31.925935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.558 qpair failed and we were unable to recover it. 00:34:47.558 [2024-07-14 09:44:31.926101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.558 [2024-07-14 09:44:31.926127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.558 qpair failed and we were unable to recover it. 00:34:47.558 [2024-07-14 09:44:31.926297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.558 [2024-07-14 09:44:31.926323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.558 qpair failed and we were unable to recover it. 00:34:47.558 [2024-07-14 09:44:31.926506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.558 [2024-07-14 09:44:31.926534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.558 qpair failed and we were unable to recover it. 00:34:47.558 [2024-07-14 09:44:31.926710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.558 [2024-07-14 09:44:31.926739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.558 qpair failed and we were unable to recover it. 00:34:47.558 [2024-07-14 09:44:31.926979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.558 [2024-07-14 09:44:31.927005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.558 qpair failed and we were unable to recover it. 00:34:47.558 [2024-07-14 09:44:31.927187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.558 [2024-07-14 09:44:31.927213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.558 qpair failed and we were unable to recover it. 00:34:47.558 [2024-07-14 09:44:31.927398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.558 [2024-07-14 09:44:31.927424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.558 qpair failed and we were unable to recover it. 00:34:47.558 [2024-07-14 09:44:31.927607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.558 [2024-07-14 09:44:31.927632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.558 qpair failed and we were unable to recover it. 
00:34:47.558 [2024-07-14 09:44:31.927824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.558 [2024-07-14 09:44:31.927852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.558 qpair failed and we were unable to recover it. 00:34:47.558 [2024-07-14 09:44:31.928065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.558 [2024-07-14 09:44:31.928092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.558 qpair failed and we were unable to recover it. 00:34:47.558 [2024-07-14 09:44:31.928283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.558 [2024-07-14 09:44:31.928308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.558 qpair failed and we were unable to recover it. 00:34:47.558 [2024-07-14 09:44:31.928493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.558 [2024-07-14 09:44:31.928521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.558 qpair failed and we were unable to recover it. 00:34:47.558 [2024-07-14 09:44:31.928733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.558 [2024-07-14 09:44:31.928763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.558 qpair failed and we were unable to recover it. 00:34:47.558 [2024-07-14 09:44:31.928954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.558 [2024-07-14 09:44:31.928981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.558 qpair failed and we were unable to recover it. 00:34:47.558 [2024-07-14 09:44:31.929165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.558 [2024-07-14 09:44:31.929191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.558 qpair failed and we were unable to recover it. 00:34:47.558 [2024-07-14 09:44:31.929404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.558 [2024-07-14 09:44:31.929433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.558 qpair failed and we were unable to recover it. 00:34:47.558 [2024-07-14 09:44:31.929643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.558 [2024-07-14 09:44:31.929671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.559 qpair failed and we were unable to recover it. 00:34:47.559 [2024-07-14 09:44:31.929919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.559 [2024-07-14 09:44:31.929945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.559 qpair failed and we were unable to recover it. 
00:34:47.559 [2024-07-14 09:44:31.930165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.559 [2024-07-14 09:44:31.930193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.559 qpair failed and we were unable to recover it. 00:34:47.559 [2024-07-14 09:44:31.930433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.559 [2024-07-14 09:44:31.930459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.559 qpair failed and we were unable to recover it. 00:34:47.559 [2024-07-14 09:44:31.930673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.559 [2024-07-14 09:44:31.930699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.559 qpair failed and we were unable to recover it. 00:34:47.559 [2024-07-14 09:44:31.930924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.559 [2024-07-14 09:44:31.930951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.559 qpair failed and we were unable to recover it. 00:34:47.559 [2024-07-14 09:44:31.931122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.559 [2024-07-14 09:44:31.931165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.559 qpair failed and we were unable to recover it. 00:34:47.559 [2024-07-14 09:44:31.931402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.559 [2024-07-14 09:44:31.931428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.559 qpair failed and we were unable to recover it. 00:34:47.559 [2024-07-14 09:44:31.931653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.559 [2024-07-14 09:44:31.931682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.559 qpair failed and we were unable to recover it. 00:34:47.559 [2024-07-14 09:44:31.931935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.559 [2024-07-14 09:44:31.931962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.559 qpair failed and we were unable to recover it. 00:34:47.559 [2024-07-14 09:44:31.932145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.559 [2024-07-14 09:44:31.932170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.559 qpair failed and we were unable to recover it. 00:34:47.559 [2024-07-14 09:44:31.932423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.559 [2024-07-14 09:44:31.932452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.559 qpair failed and we were unable to recover it. 
00:34:47.559 [2024-07-14 09:44:31.932658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.559 [2024-07-14 09:44:31.932686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.559 qpair failed and we were unable to recover it. 00:34:47.559 [2024-07-14 09:44:31.932895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.559 [2024-07-14 09:44:31.932922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.559 qpair failed and we were unable to recover it. 00:34:47.559 [2024-07-14 09:44:31.933127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.559 [2024-07-14 09:44:31.933153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.559 qpair failed and we were unable to recover it. 00:34:47.559 [2024-07-14 09:44:31.933392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.559 [2024-07-14 09:44:31.933420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.559 qpair failed and we were unable to recover it. 00:34:47.559 [2024-07-14 09:44:31.933650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.559 [2024-07-14 09:44:31.933675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.559 qpair failed and we were unable to recover it. 00:34:47.559 [2024-07-14 09:44:31.933862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.559 [2024-07-14 09:44:31.933896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.559 qpair failed and we were unable to recover it. 00:34:47.559 [2024-07-14 09:44:31.934111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.559 [2024-07-14 09:44:31.934137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.559 qpair failed and we were unable to recover it. 00:34:47.559 [2024-07-14 09:44:31.934311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.559 [2024-07-14 09:44:31.934336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.559 qpair failed and we were unable to recover it. 00:34:47.559 [2024-07-14 09:44:31.934563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.559 [2024-07-14 09:44:31.934597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.559 qpair failed and we were unable to recover it. 00:34:47.559 [2024-07-14 09:44:31.934762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.559 [2024-07-14 09:44:31.934788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.559 qpair failed and we were unable to recover it. 
00:34:47.559 [2024-07-14 09:44:31.935070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.559 [2024-07-14 09:44:31.935096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.559 qpair failed and we were unable to recover it. 00:34:47.559 [2024-07-14 09:44:31.935320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.559 [2024-07-14 09:44:31.935349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.559 qpair failed and we were unable to recover it. 00:34:47.559 [2024-07-14 09:44:31.935586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.559 [2024-07-14 09:44:31.935615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.559 qpair failed and we were unable to recover it. 00:34:47.559 [2024-07-14 09:44:31.935832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.560 [2024-07-14 09:44:31.935858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.560 qpair failed and we were unable to recover it. 00:34:47.560 [2024-07-14 09:44:31.936059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.560 [2024-07-14 09:44:31.936084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.560 qpair failed and we were unable to recover it. 00:34:47.560 [2024-07-14 09:44:31.936295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.560 [2024-07-14 09:44:31.936324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.560 qpair failed and we were unable to recover it. 00:34:47.560 [2024-07-14 09:44:31.936523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.560 [2024-07-14 09:44:31.936548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.560 qpair failed and we were unable to recover it. 00:34:47.560 [2024-07-14 09:44:31.936775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.560 [2024-07-14 09:44:31.936821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.560 qpair failed and we were unable to recover it. 00:34:47.560 [2024-07-14 09:44:31.937024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.560 [2024-07-14 09:44:31.937051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.560 qpair failed and we were unable to recover it. 00:34:47.560 [2024-07-14 09:44:31.937245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.560 [2024-07-14 09:44:31.937270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.560 qpair failed and we were unable to recover it. 
00:34:47.560 [2024-07-14 09:44:31.937491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.560 [2024-07-14 09:44:31.937520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.560 qpair failed and we were unable to recover it. 00:34:47.560 [2024-07-14 09:44:31.937762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.560 [2024-07-14 09:44:31.937788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.560 qpair failed and we were unable to recover it. 00:34:47.560 [2024-07-14 09:44:31.937973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.560 [2024-07-14 09:44:31.938000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.560 qpair failed and we were unable to recover it. 00:34:47.560 [2024-07-14 09:44:31.938210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.560 [2024-07-14 09:44:31.938239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.560 qpair failed and we were unable to recover it. 00:34:47.560 [2024-07-14 09:44:31.938471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.560 [2024-07-14 09:44:31.938499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.560 qpair failed and we were unable to recover it. 00:34:47.560 [2024-07-14 09:44:31.938713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.560 [2024-07-14 09:44:31.938742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.560 qpair failed and we were unable to recover it. 00:34:47.560 [2024-07-14 09:44:31.938979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.560 [2024-07-14 09:44:31.939009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.560 qpair failed and we were unable to recover it. 00:34:47.560 [2024-07-14 09:44:31.939183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.560 [2024-07-14 09:44:31.939211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.560 qpair failed and we were unable to recover it. 00:34:47.560 [2024-07-14 09:44:31.939414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.560 [2024-07-14 09:44:31.939439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.560 qpair failed and we were unable to recover it. 00:34:47.560 [2024-07-14 09:44:31.939652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.560 [2024-07-14 09:44:31.939681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.560 qpair failed and we were unable to recover it. 
00:34:47.560 [2024-07-14 09:44:31.939889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.560 [2024-07-14 09:44:31.939923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.560 qpair failed and we were unable to recover it. 00:34:47.560 [2024-07-14 09:44:31.940116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.560 [2024-07-14 09:44:31.940151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.560 qpair failed and we were unable to recover it. 00:34:47.560 [2024-07-14 09:44:31.940391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.560 [2024-07-14 09:44:31.940420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.560 qpair failed and we were unable to recover it. 00:34:47.560 [2024-07-14 09:44:31.940636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.560 [2024-07-14 09:44:31.940661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.560 qpair failed and we were unable to recover it. 00:34:47.560 [2024-07-14 09:44:31.940856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.560 [2024-07-14 09:44:31.940887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.560 qpair failed and we were unable to recover it. 00:34:47.560 [2024-07-14 09:44:31.941080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.560 [2024-07-14 09:44:31.941106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.560 qpair failed and we were unable to recover it. 00:34:47.560 [2024-07-14 09:44:31.941299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.560 [2024-07-14 09:44:31.941327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.560 qpair failed and we were unable to recover it. 00:34:47.560 [2024-07-14 09:44:31.941566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.560 [2024-07-14 09:44:31.941592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.560 qpair failed and we were unable to recover it. 00:34:47.560 [2024-07-14 09:44:31.941817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.561 [2024-07-14 09:44:31.941843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.561 qpair failed and we were unable to recover it. 00:34:47.561 [2024-07-14 09:44:31.942056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.561 [2024-07-14 09:44:31.942082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.561 qpair failed and we were unable to recover it. 
00:34:47.561 [2024-07-14 09:44:31.942298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.561 [2024-07-14 09:44:31.942323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.561 qpair failed and we were unable to recover it. 00:34:47.561 [2024-07-14 09:44:31.942545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.561 [2024-07-14 09:44:31.942574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.561 qpair failed and we were unable to recover it. 00:34:47.561 [2024-07-14 09:44:31.942761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.561 [2024-07-14 09:44:31.942790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.561 qpair failed and we were unable to recover it. 00:34:47.561 [2024-07-14 09:44:31.942977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.561 [2024-07-14 09:44:31.943003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.561 qpair failed and we were unable to recover it. 00:34:47.561 [2024-07-14 09:44:31.943172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.561 [2024-07-14 09:44:31.943198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.561 qpair failed and we were unable to recover it. 00:34:47.561 [2024-07-14 09:44:31.943419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.561 [2024-07-14 09:44:31.943448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.561 qpair failed and we were unable to recover it. 00:34:47.561 [2024-07-14 09:44:31.943651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.561 [2024-07-14 09:44:31.943676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.561 qpair failed and we were unable to recover it. 00:34:47.561 [2024-07-14 09:44:31.943862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.561 [2024-07-14 09:44:31.943896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.561 qpair failed and we were unable to recover it. 00:34:47.561 [2024-07-14 09:44:31.944115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.561 [2024-07-14 09:44:31.944151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.561 qpair failed and we were unable to recover it. 00:34:47.561 [2024-07-14 09:44:31.944366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.561 [2024-07-14 09:44:31.944391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.561 qpair failed and we were unable to recover it. 
00:34:47.561 [2024-07-14 09:44:31.944587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.561 [2024-07-14 09:44:31.944613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.561 qpair failed and we were unable to recover it. 00:34:47.561 [2024-07-14 09:44:31.944830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.561 [2024-07-14 09:44:31.944858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.561 qpair failed and we were unable to recover it. 00:34:47.561 [2024-07-14 09:44:31.945073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.561 [2024-07-14 09:44:31.945102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.561 qpair failed and we were unable to recover it. 00:34:47.561 [2024-07-14 09:44:31.945316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.561 [2024-07-14 09:44:31.945344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.561 qpair failed and we were unable to recover it. 00:34:47.561 [2024-07-14 09:44:31.945559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.561 [2024-07-14 09:44:31.945587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.561 qpair failed and we were unable to recover it. 00:34:47.561 [2024-07-14 09:44:31.945800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.561 [2024-07-14 09:44:31.945825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.561 qpair failed and we were unable to recover it. 00:34:47.561 [2024-07-14 09:44:31.946021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.561 [2024-07-14 09:44:31.946047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.561 qpair failed and we were unable to recover it. 00:34:47.561 [2024-07-14 09:44:31.946244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.561 [2024-07-14 09:44:31.946273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.561 qpair failed and we were unable to recover it. 00:34:47.561 [2024-07-14 09:44:31.946460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.561 [2024-07-14 09:44:31.946486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.561 qpair failed and we were unable to recover it. 00:34:47.561 [2024-07-14 09:44:31.946647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.561 [2024-07-14 09:44:31.946690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.561 qpair failed and we were unable to recover it. 
00:34:47.561 [2024-07-14 09:44:31.946901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.561 [2024-07-14 09:44:31.946940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.561 qpair failed and we were unable to recover it. 00:34:47.561 [2024-07-14 09:44:31.947123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.561 [2024-07-14 09:44:31.947152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.561 qpair failed and we were unable to recover it. 00:34:47.561 [2024-07-14 09:44:31.947374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.561 [2024-07-14 09:44:31.947403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.561 qpair failed and we were unable to recover it. 00:34:47.561 [2024-07-14 09:44:31.947637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.561 [2024-07-14 09:44:31.947665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.561 qpair failed and we were unable to recover it. 00:34:47.561 [2024-07-14 09:44:31.947852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.561 [2024-07-14 09:44:31.947885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.561 qpair failed and we were unable to recover it. 00:34:47.561 [2024-07-14 09:44:31.948096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.561 [2024-07-14 09:44:31.948121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.561 qpair failed and we were unable to recover it. 00:34:47.561 [2024-07-14 09:44:31.948336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.562 [2024-07-14 09:44:31.948364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.562 qpair failed and we were unable to recover it. 00:34:47.562 [2024-07-14 09:44:31.948572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.562 [2024-07-14 09:44:31.948597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.562 qpair failed and we were unable to recover it. 00:34:47.562 [2024-07-14 09:44:31.948818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.562 [2024-07-14 09:44:31.948846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.562 qpair failed and we were unable to recover it. 00:34:47.562 [2024-07-14 09:44:31.949081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.562 [2024-07-14 09:44:31.949106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.562 qpair failed and we were unable to recover it. 
00:34:47.562 [2024-07-14 09:44:31.949303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.562 [2024-07-14 09:44:31.949328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.562 qpair failed and we were unable to recover it. 00:34:47.562 [2024-07-14 09:44:31.949539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.562 [2024-07-14 09:44:31.949567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.562 qpair failed and we were unable to recover it. 00:34:47.562 [2024-07-14 09:44:31.949798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.562 [2024-07-14 09:44:31.949827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.562 qpair failed and we were unable to recover it. 00:34:47.562 [2024-07-14 09:44:31.950009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.562 [2024-07-14 09:44:31.950034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.562 qpair failed and we were unable to recover it. 00:34:47.562 [2024-07-14 09:44:31.950206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.562 [2024-07-14 09:44:31.950231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.562 qpair failed and we were unable to recover it. 00:34:47.562 [2024-07-14 09:44:31.950444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.562 [2024-07-14 09:44:31.950472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.562 qpair failed and we were unable to recover it. 00:34:47.562 [2024-07-14 09:44:31.950705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.562 [2024-07-14 09:44:31.950731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.562 qpair failed and we were unable to recover it. 00:34:47.562 [2024-07-14 09:44:31.950904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.562 [2024-07-14 09:44:31.950940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.562 qpair failed and we were unable to recover it. 00:34:47.562 [2024-07-14 09:44:31.951121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.562 [2024-07-14 09:44:31.951151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.562 qpair failed and we were unable to recover it. 00:34:47.562 [2024-07-14 09:44:31.951356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.562 [2024-07-14 09:44:31.951381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.562 qpair failed and we were unable to recover it. 
00:34:47.562 [2024-07-14 09:44:31.951635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.562 [2024-07-14 09:44:31.951664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.562 qpair failed and we were unable to recover it. 00:34:47.562 [2024-07-14 09:44:31.951871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.562 [2024-07-14 09:44:31.951908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.562 qpair failed and we were unable to recover it. 00:34:47.562 [2024-07-14 09:44:31.952086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.562 [2024-07-14 09:44:31.952112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.562 qpair failed and we were unable to recover it. 00:34:47.562 [2024-07-14 09:44:31.952355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.562 [2024-07-14 09:44:31.952384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.562 qpair failed and we were unable to recover it. 00:34:47.562 [2024-07-14 09:44:31.952576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.562 [2024-07-14 09:44:31.952601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.562 qpair failed and we were unable to recover it. 00:34:47.562 [2024-07-14 09:44:31.952826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.563 [2024-07-14 09:44:31.952852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.563 qpair failed and we were unable to recover it. 00:34:47.563 [2024-07-14 09:44:31.953098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.563 [2024-07-14 09:44:31.953124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.563 qpair failed and we were unable to recover it. 00:34:47.563 [2024-07-14 09:44:31.953354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.563 [2024-07-14 09:44:31.953382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.563 qpair failed and we were unable to recover it. 00:34:47.563 [2024-07-14 09:44:31.953622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.563 [2024-07-14 09:44:31.953647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.563 qpair failed and we were unable to recover it. 00:34:47.563 [2024-07-14 09:44:31.953857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.563 [2024-07-14 09:44:31.953894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.563 qpair failed and we were unable to recover it. 
00:34:47.563 [2024-07-14 09:44:31.954107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.563 [2024-07-14 09:44:31.954140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.563 qpair failed and we were unable to recover it. 00:34:47.563 [2024-07-14 09:44:31.954354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.563 [2024-07-14 09:44:31.954379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.563 qpair failed and we were unable to recover it. 00:34:47.563 [2024-07-14 09:44:31.954595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.563 [2024-07-14 09:44:31.954624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.563 qpair failed and we were unable to recover it. 00:34:47.563 [2024-07-14 09:44:31.954839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.563 [2024-07-14 09:44:31.954875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.563 qpair failed and we were unable to recover it. 00:34:47.563 [2024-07-14 09:44:31.955042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.563 [2024-07-14 09:44:31.955067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.563 qpair failed and we were unable to recover it. 00:34:47.563 [2024-07-14 09:44:31.955233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.563 [2024-07-14 09:44:31.955259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.563 qpair failed and we were unable to recover it. 00:34:47.563 [2024-07-14 09:44:31.955470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.563 [2024-07-14 09:44:31.955498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.563 qpair failed and we were unable to recover it. 00:34:47.563 [2024-07-14 09:44:31.955735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.563 [2024-07-14 09:44:31.955761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.563 qpair failed and we were unable to recover it. 00:34:47.563 [2024-07-14 09:44:31.955924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.563 [2024-07-14 09:44:31.955950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.563 qpair failed and we were unable to recover it. 00:34:47.563 [2024-07-14 09:44:31.956144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.563 [2024-07-14 09:44:31.956180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.563 qpair failed and we were unable to recover it. 
00:34:47.563 [2024-07-14 09:44:31.956358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.563 [2024-07-14 09:44:31.956385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.563 qpair failed and we were unable to recover it. 00:34:47.563 [2024-07-14 09:44:31.956582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.563 [2024-07-14 09:44:31.956609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.563 qpair failed and we were unable to recover it. 00:34:47.563 [2024-07-14 09:44:31.956822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.563 [2024-07-14 09:44:31.956850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.563 qpair failed and we were unable to recover it. 00:34:47.563 [2024-07-14 09:44:31.957059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.563 [2024-07-14 09:44:31.957085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.563 qpair failed and we were unable to recover it. 00:34:47.563 [2024-07-14 09:44:31.957297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.563 [2024-07-14 09:44:31.957327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.563 qpair failed and we were unable to recover it. 00:34:47.563 [2024-07-14 09:44:31.957535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.563 [2024-07-14 09:44:31.957563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.563 qpair failed and we were unable to recover it. 00:34:47.563 [2024-07-14 09:44:31.957796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.563 [2024-07-14 09:44:31.957822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.563 qpair failed and we were unable to recover it. 00:34:47.563 [2024-07-14 09:44:31.958064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.563 [2024-07-14 09:44:31.958096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.563 qpair failed and we were unable to recover it. 00:34:47.563 [2024-07-14 09:44:31.958268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.563 [2024-07-14 09:44:31.958306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.563 qpair failed and we were unable to recover it. 00:34:47.563 [2024-07-14 09:44:31.958573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.563 [2024-07-14 09:44:31.958614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.563 qpair failed and we were unable to recover it. 
00:34:47.563 [2024-07-14 09:44:31.958831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.564 [2024-07-14 09:44:31.958886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.564 qpair failed and we were unable to recover it. 00:34:47.564 [2024-07-14 09:44:31.959070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.564 [2024-07-14 09:44:31.959097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.564 qpair failed and we were unable to recover it. 00:34:47.564 [2024-07-14 09:44:31.959273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.564 [2024-07-14 09:44:31.959301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.564 qpair failed and we were unable to recover it. 00:34:47.564 [2024-07-14 09:44:31.959525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.564 [2024-07-14 09:44:31.959568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.564 qpair failed and we were unable to recover it. 00:34:47.564 [2024-07-14 09:44:31.959812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.564 [2024-07-14 09:44:31.959856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.564 qpair failed and we were unable to recover it. 00:34:47.564 [2024-07-14 09:44:31.960052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.564 [2024-07-14 09:44:31.960084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.564 qpair failed and we were unable to recover it. 00:34:47.564 [2024-07-14 09:44:31.960345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.564 [2024-07-14 09:44:31.960399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.564 qpair failed and we were unable to recover it. 00:34:47.833 [2024-07-14 09:44:31.960639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.833 [2024-07-14 09:44:31.960686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.833 qpair failed and we were unable to recover it. 00:34:47.833 [2024-07-14 09:44:31.960880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.833 [2024-07-14 09:44:31.960907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.833 qpair failed and we were unable to recover it. 00:34:47.833 [2024-07-14 09:44:31.961130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.833 [2024-07-14 09:44:31.961176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.833 qpair failed and we were unable to recover it. 
00:34:47.833 [2024-07-14 09:44:31.961406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.833 [2024-07-14 09:44:31.961455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.833 qpair failed and we were unable to recover it. 00:34:47.833 [2024-07-14 09:44:31.961655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.833 [2024-07-14 09:44:31.961685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.833 qpair failed and we were unable to recover it. 00:34:47.833 [2024-07-14 09:44:31.961906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.833 [2024-07-14 09:44:31.961951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.833 qpair failed and we were unable to recover it. 00:34:47.833 [2024-07-14 09:44:31.962258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.833 [2024-07-14 09:44:31.962303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.833 qpair failed and we were unable to recover it. 00:34:47.833 [2024-07-14 09:44:31.962510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.833 [2024-07-14 09:44:31.962539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.833 qpair failed and we were unable to recover it. 00:34:47.833 [2024-07-14 09:44:31.962708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.833 [2024-07-14 09:44:31.962734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.833 qpair failed and we were unable to recover it. 00:34:47.833 [2024-07-14 09:44:31.962952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.833 [2024-07-14 09:44:31.962998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.833 qpair failed and we were unable to recover it. 00:34:47.833 [2024-07-14 09:44:31.963213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.833 [2024-07-14 09:44:31.963256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.833 qpair failed and we were unable to recover it. 00:34:47.833 [2024-07-14 09:44:31.963475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.833 [2024-07-14 09:44:31.963506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.833 qpair failed and we were unable to recover it. 00:34:47.833 [2024-07-14 09:44:31.963693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.833 [2024-07-14 09:44:31.963722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.833 qpair failed and we were unable to recover it. 
00:34:47.833 [2024-07-14 09:44:31.963916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.833 [2024-07-14 09:44:31.963944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.833 qpair failed and we were unable to recover it. 00:34:47.833 [2024-07-14 09:44:31.964134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.833 [2024-07-14 09:44:31.964180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.833 qpair failed and we were unable to recover it. 00:34:47.833 [2024-07-14 09:44:31.964373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.833 [2024-07-14 09:44:31.964402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.833 qpair failed and we were unable to recover it. 00:34:47.833 [2024-07-14 09:44:31.964612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.833 [2024-07-14 09:44:31.964655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.833 qpair failed and we were unable to recover it. 00:34:47.833 [2024-07-14 09:44:31.964854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.833 [2024-07-14 09:44:31.964912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.833 qpair failed and we were unable to recover it. 00:34:47.833 [2024-07-14 09:44:31.965103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.833 [2024-07-14 09:44:31.965130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.833 qpair failed and we were unable to recover it. 00:34:47.833 [2024-07-14 09:44:31.965349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.833 [2024-07-14 09:44:31.965379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.833 qpair failed and we were unable to recover it. 00:34:47.833 [2024-07-14 09:44:31.965567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.833 [2024-07-14 09:44:31.965606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.833 qpair failed and we were unable to recover it. 00:34:47.833 [2024-07-14 09:44:31.965835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.833 [2024-07-14 09:44:31.965864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.833 qpair failed and we were unable to recover it. 00:34:47.833 [2024-07-14 09:44:31.966058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.833 [2024-07-14 09:44:31.966112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.833 qpair failed and we were unable to recover it. 
00:34:47.833 [2024-07-14 09:44:31.966306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.833 [2024-07-14 09:44:31.966335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.833 qpair failed and we were unable to recover it. 00:34:47.833 [2024-07-14 09:44:31.966563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.833 [2024-07-14 09:44:31.966593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.833 qpair failed and we were unable to recover it. 00:34:47.833 [2024-07-14 09:44:31.966818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.833 [2024-07-14 09:44:31.966845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.833 qpair failed and we were unable to recover it. 00:34:47.833 [2024-07-14 09:44:31.967050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.833 [2024-07-14 09:44:31.967081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.833 qpair failed and we were unable to recover it. 00:34:47.833 [2024-07-14 09:44:31.967299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.833 [2024-07-14 09:44:31.967328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.833 qpair failed and we were unable to recover it. 00:34:47.833 [2024-07-14 09:44:31.967531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.833 [2024-07-14 09:44:31.967562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.833 qpair failed and we were unable to recover it. 00:34:47.833 [2024-07-14 09:44:31.967821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.833 [2024-07-14 09:44:31.967851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.833 qpair failed and we were unable to recover it. 00:34:47.833 [2024-07-14 09:44:31.968050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.833 [2024-07-14 09:44:31.968082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.833 qpair failed and we were unable to recover it. 00:34:47.833 [2024-07-14 09:44:31.968276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.833 [2024-07-14 09:44:31.968303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.833 qpair failed and we were unable to recover it. 00:34:47.833 [2024-07-14 09:44:31.968507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.833 [2024-07-14 09:44:31.968540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.833 qpair failed and we were unable to recover it. 
00:34:47.833 [2024-07-14 09:44:31.968767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.833 [2024-07-14 09:44:31.968796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.833 qpair failed and we were unable to recover it. 00:34:47.833 [2024-07-14 09:44:31.969004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.833 [2024-07-14 09:44:31.969031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.833 qpair failed and we were unable to recover it. 00:34:47.833 [2024-07-14 09:44:31.969229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.833 [2024-07-14 09:44:31.969256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.833 qpair failed and we were unable to recover it. 00:34:47.833 [2024-07-14 09:44:31.969441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.833 [2024-07-14 09:44:31.969474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.833 qpair failed and we were unable to recover it. 00:34:47.833 [2024-07-14 09:44:31.969682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.833 [2024-07-14 09:44:31.969718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.833 qpair failed and we were unable to recover it. 00:34:47.833 [2024-07-14 09:44:31.969933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.833 [2024-07-14 09:44:31.969960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.833 qpair failed and we were unable to recover it. 00:34:47.833 [2024-07-14 09:44:31.970169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.833 [2024-07-14 09:44:31.970200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.833 qpair failed and we were unable to recover it. 00:34:47.833 [2024-07-14 09:44:31.970381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.833 [2024-07-14 09:44:31.970410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.833 qpair failed and we were unable to recover it. 00:34:47.833 [2024-07-14 09:44:31.970653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.833 [2024-07-14 09:44:31.970682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.833 qpair failed and we were unable to recover it. 00:34:47.833 [2024-07-14 09:44:31.970877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.833 [2024-07-14 09:44:31.970932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.833 qpair failed and we were unable to recover it. 
00:34:47.833 [2024-07-14 09:44:31.971123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.833 [2024-07-14 09:44:31.971149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.833 qpair failed and we were unable to recover it. 00:34:47.833 [2024-07-14 09:44:31.971369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.833 [2024-07-14 09:44:31.971398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.833 qpair failed and we were unable to recover it. 00:34:47.833 [2024-07-14 09:44:31.971582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.833 [2024-07-14 09:44:31.971612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.833 qpair failed and we were unable to recover it. 00:34:47.833 [2024-07-14 09:44:31.971853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.833 [2024-07-14 09:44:31.971884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.833 qpair failed and we were unable to recover it. 00:34:47.833 [2024-07-14 09:44:31.972054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.833 [2024-07-14 09:44:31.972089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.833 qpair failed and we were unable to recover it. 00:34:47.833 [2024-07-14 09:44:31.972299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.833 [2024-07-14 09:44:31.972328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.833 qpair failed and we were unable to recover it. 00:34:47.833 [2024-07-14 09:44:31.972529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.833 [2024-07-14 09:44:31.972572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.833 qpair failed and we were unable to recover it. 00:34:47.833 [2024-07-14 09:44:31.972788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.833 [2024-07-14 09:44:31.972817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.833 qpair failed and we were unable to recover it. 00:34:47.833 [2024-07-14 09:44:31.973015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.833 [2024-07-14 09:44:31.973042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.833 qpair failed and we were unable to recover it. 00:34:47.833 [2024-07-14 09:44:31.973261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.833 [2024-07-14 09:44:31.973289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.833 qpair failed and we were unable to recover it. 
00:34:47.833 [2024-07-14 09:44:31.973489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.833 [2024-07-14 09:44:31.973520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.833 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.973734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.973762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.973980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.974008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.974199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.974229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.974444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.974476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.974708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.974737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.974942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.974968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.975135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.975181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.975423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.975449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.975643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.975684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 
00:34:47.834 [2024-07-14 09:44:31.975913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.975939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.976127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.976153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.976336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.976365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.976580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.976610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.976824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.976856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.977042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.977068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.977262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.977291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.977492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.977521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.977728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.977756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.977971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.977997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 
00:34:47.834 [2024-07-14 09:44:31.978182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.978208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.978396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.978424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.978626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.978654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.978873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.978899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.979065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.979091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.979308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.979334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.979567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.979596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.979807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.979835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.980031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.980057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.980233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.980259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 
00:34:47.834 [2024-07-14 09:44:31.980434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.980463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.980666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.980694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.980888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.980914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.981103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.981129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.981347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.981375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.981619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.981647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.981855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.981889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.982099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.982125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.982362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.982388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.982600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.982628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 
00:34:47.834 [2024-07-14 09:44:31.982846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.982877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.983068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.983093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.983254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.983279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.983493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.983522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.983780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.983833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.984036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.984066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.984313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.984342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.984744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.984797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.985015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.985042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.985239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.985267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 
00:34:47.834 [2024-07-14 09:44:31.985475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.985503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.985712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.985740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.985992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.986018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.986214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.986240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.986441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.986466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.986687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.986715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.986925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.986952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.987138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.987166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.987377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.987405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.987623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.987648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 
00:34:47.834 [2024-07-14 09:44:31.987858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.987891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.988083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.988111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.988306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.988332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.988557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.988585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.988793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.988821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.989020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.989046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.989287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.989315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.989551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.989577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.989762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.989788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.990007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.990036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 
00:34:47.834 [2024-07-14 09:44:31.990257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.990285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.990492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.990517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.990883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.990953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.991173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.991201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.991435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.991461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.991685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.991713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.991901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.991930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.992148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.834 [2024-07-14 09:44:31.992174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.834 qpair failed and we were unable to recover it. 00:34:47.834 [2024-07-14 09:44:31.992395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:31.992423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:31.992660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:31.992688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 
00:34:47.835 [2024-07-14 09:44:31.992900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:31.992926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:31.993154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:31.993182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:31.993394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:31.993419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:31.993630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:31.993655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:31.993847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:31.993882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:31.994133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:31.994161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:31.994357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:31.994383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:31.994620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:31.994648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:31.994836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:31.994863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:31.995096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:31.995122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 
00:34:47.835 [2024-07-14 09:44:31.995344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:31.995369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:31.995600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:31.995625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:31.995834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:31.995858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:31.996101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:31.996129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:31.996311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:31.996338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:31.996525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:31.996551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:31.996764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:31.996792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:31.997044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:31.997073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:31.997286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:31.997311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:31.997523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:31.997552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 
00:34:47.835 [2024-07-14 09:44:31.997764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:31.997792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:31.997997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:31.998025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:31.998248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:31.998276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:31.998511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:31.998539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:31.998748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:31.998773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:31.998997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:31.999026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:31.999211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:31.999239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:31.999470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:31.999495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:31.999683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:31.999710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:31.999926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:31.999955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 
00:34:47.835 [2024-07-14 09:44:32.000175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:32.000200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:32.000412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:32.000439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:32.000633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:32.000658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:32.000849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:32.000885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:32.001129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:32.001157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:32.001338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:32.001367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:32.001577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:32.001602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:32.001817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:32.001845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:32.002095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:32.002123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:32.002340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:32.002365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 
00:34:47.835 [2024-07-14 09:44:32.002607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:32.002635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:32.002837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:32.002872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:32.003068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:32.003093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:32.003310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:32.003338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:32.003545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:32.003573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:32.003745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:32.003770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:32.003985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:32.004013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:32.004228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:32.004253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:32.004441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:32.004466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:32.004701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:32.004729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 
00:34:47.835 [2024-07-14 09:44:32.004913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:32.004952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:32.005180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:32.005206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:32.005427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:32.005456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:32.005638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:32.005666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:32.005884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:32.005911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:32.006106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:32.006132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:32.006328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:32.006356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:32.006543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:32.006569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:32.006810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:32.006838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:32.007066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:32.007091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 
00:34:47.835 [2024-07-14 09:44:32.007288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:32.007313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:32.007540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:32.007581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:32.007766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:32.007794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:32.007993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:32.008019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:32.008207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:32.008233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:32.008394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:32.008419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:32.008613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:32.008637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:32.008801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:32.008825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:32.009052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:32.009081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:32.009293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:32.009319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 
00:34:47.835 [2024-07-14 09:44:32.009522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:32.009551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:32.009759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:32.009787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:32.009982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:32.010013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.835 qpair failed and we were unable to recover it. 00:34:47.835 [2024-07-14 09:44:32.010256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.835 [2024-07-14 09:44:32.010284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.010474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.010507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.010724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.010749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.010968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.010997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.011201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.011229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.011416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.011441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.011675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.011702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 
00:34:47.836 [2024-07-14 09:44:32.011913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.011942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.012134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.012159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.012371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.012399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.012630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.012656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.012873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.012900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.013128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.013154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.013365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.013393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.013610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.013635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.013822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.013847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.014036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.014062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 
00:34:47.836 [2024-07-14 09:44:32.014266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.014291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.014503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.014543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.014747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.014774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.014962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.014990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.015157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.015182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.015374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.015401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.015609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.015635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.015825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.015850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.016046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.016070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.016296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.016321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 
00:34:47.836 [2024-07-14 09:44:32.016494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.016519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.016708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.016738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.016932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.016958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.017146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.017175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.017381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.017409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.017594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.017619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.017833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.017863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.018088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.018115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.018299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.018324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.018562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.018589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 
00:34:47.836 [2024-07-14 09:44:32.018792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.018821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.019017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.019044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.019210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.019235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.019440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.019467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.019652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.019678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.019850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.019904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.020195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.020223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.020438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.020463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.020674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.020698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.020879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.020941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 
00:34:47.836 [2024-07-14 09:44:32.021147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.021174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.021391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.021419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.021638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.021666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.021878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.021904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.022129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.022157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.022365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.022393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.022612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.022637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.022885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.022914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.023089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.023117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.023335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.023360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 
00:34:47.836 [2024-07-14 09:44:32.023556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.023583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.023791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.023815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.024015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.024041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.024291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.024319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.024504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.024534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.024748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.024773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.024938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.024965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.025193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.025218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.025438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.025463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.025689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.025718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 
00:34:47.836 [2024-07-14 09:44:32.025952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.025986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.026170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.026196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.026410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.026445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.026653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.026680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.026915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.026940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.027189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.027213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.027457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.027484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.027717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.027742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.027939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.027967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.028146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.028175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 
00:34:47.836 [2024-07-14 09:44:32.028374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.028400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.028590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.836 [2024-07-14 09:44:32.028618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.836 qpair failed and we were unable to recover it. 00:34:47.836 [2024-07-14 09:44:32.028821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.028849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.029086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.029112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.029323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.029352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.029584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.029613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.029832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.029857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.030093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.030121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.030331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.030359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.030572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.030597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 
00:34:47.837 [2024-07-14 09:44:32.030775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.030804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.030998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.031027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.031235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.031259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.031473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.031500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.031709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.031737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.031950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.031982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.032185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.032210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.032372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.032397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.032581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.032606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.032851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.032893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 
00:34:47.837 [2024-07-14 09:44:32.033110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.033136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.033328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.033354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.033536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.033560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.033727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.033751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.033977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.034003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.034221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.034249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.034460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.034485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.034701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.034726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.034925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.034951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.035163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.035191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 
00:34:47.837 [2024-07-14 09:44:32.035374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.035399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.035609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.035637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.035817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.035845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.036101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.036126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.036338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.036366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.036548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.036577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.036795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.036820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.037020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.037046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.037240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.037268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.037462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.037487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 
00:34:47.837 [2024-07-14 09:44:32.037708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.037735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.037927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.037954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.038141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.038165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.038383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.038411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.038619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.038647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.038849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.038881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.039095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.039123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.039306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.039334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.039546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.039571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.039763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.039787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 
00:34:47.837 [2024-07-14 09:44:32.039980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.040005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.040192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.040218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.040434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.040462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.040698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.040726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.040948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.040974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.041228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.041256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.041463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.041491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.041675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.041700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.041937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.041966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.042155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.042180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 
00:34:47.837 [2024-07-14 09:44:32.042396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.042425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.042639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.042668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.042902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.042931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.043126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.043152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.043358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.043386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.043567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.043594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.043799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.043824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.044011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.044037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.044256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.044284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.044495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.044520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 
00:34:47.837 [2024-07-14 09:44:32.044698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.044726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.044937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.837 [2024-07-14 09:44:32.044966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.837 qpair failed and we were unable to recover it. 00:34:47.837 [2024-07-14 09:44:32.045196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.045221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.045454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.045482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.045692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.045719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.045925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.045950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.046150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.046175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.046355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.046380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.046544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.046569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.046808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.046836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 
00:34:47.838 [2024-07-14 09:44:32.047057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.047086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.047273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.047300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.047537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.047565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.047782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.047811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.048042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.048067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.048278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.048305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.048526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.048553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.048795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.048823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.049063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.049088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.049286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.049311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 
00:34:47.838 [2024-07-14 09:44:32.049479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.049504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.049753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.049781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.049993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.050022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.050232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.050258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.050445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.050474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.050660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.050687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.050906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.050932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.051125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.051152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.051384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.051412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.051599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.051624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 
00:34:47.838 [2024-07-14 09:44:32.051809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.051836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.052089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.052115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.052304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.052329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.052515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.052543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.052756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.052781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.052973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.053000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.053184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.053212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.053417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.053442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.053657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.053682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.053875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.053904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 
00:34:47.838 [2024-07-14 09:44:32.054115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.054143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.054354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.054379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.054592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.054620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.054799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.054827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.055053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.055079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.055269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.055297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.055511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.055539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.055780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.055805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.055997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.056025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.056237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.056265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 
00:34:47.838 [2024-07-14 09:44:32.056438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.056463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.056697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.056724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.056938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.056963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.057153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.057178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.057342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.057368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.057555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.057583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.057770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.057794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.057988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.058013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.058258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.058291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.058506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.058531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 
00:34:47.838 [2024-07-14 09:44:32.058721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.058749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.058937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.058965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.059147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.059172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.059383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.059410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.059617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.059645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.059846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.059886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.060103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.060131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.060349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.060374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.060566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.060590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.060779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.060804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 
00:34:47.838 [2024-07-14 09:44:32.060995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.061021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.061249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.061275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.061496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.061525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.061760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.061788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.062022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.062048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.062269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.062297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.062509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.062537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.062729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.062755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.062977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.063018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.063197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.063224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 
00:34:47.838 [2024-07-14 09:44:32.063414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.063439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.063643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.063668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.063898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.063927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.064145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.064171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.064408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.064436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.064679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.838 [2024-07-14 09:44:32.064704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.838 qpair failed and we were unable to recover it. 00:34:47.838 [2024-07-14 09:44:32.064898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.064924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.065110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.065136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.065309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.065334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.065497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.065523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 
00:34:47.839 [2024-07-14 09:44:32.065742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.065770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.066022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.066048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.066241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.066267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.066509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.066537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.066704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.066733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.066938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.066963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.067200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.067228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.067438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.067465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.067676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.067700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.067922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.067952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 
00:34:47.839 [2024-07-14 09:44:32.068165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.068194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.068378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.068402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.068589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.068619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.068858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.068894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.069102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.069128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.069298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.069324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.069538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.069578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.069797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.069822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.070024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.070049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.070274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.070302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 
00:34:47.839 [2024-07-14 09:44:32.070488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.070512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.070726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.070755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.070939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.070967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.071180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.071205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.071440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.071468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.071678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.071706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.071916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.071942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.072163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.072191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.072396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.072424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.072646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.072671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 
00:34:47.839 [2024-07-14 09:44:32.072853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.072888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.073101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.073129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.073311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.073336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.073505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.073531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.073723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.073748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.073917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.073943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.074109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.074141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.074306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.074331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.074519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.074544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.074729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.074759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 
00:34:47.839 [2024-07-14 09:44:32.074997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.075026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.075219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.075244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.075422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.075451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.075656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.075684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.075923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.075949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.076171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.076199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.076438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.076466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.076672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.076698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.076917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.076946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.077160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.077187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 
00:34:47.839 [2024-07-14 09:44:32.077374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.077399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.077639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.077668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.077885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.077911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.078101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.078127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.078346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.078374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.078583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.078611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.078821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.078847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.079099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.079127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.079375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.079403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.079611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.079637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 
00:34:47.839 [2024-07-14 09:44:32.079851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.079887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.080086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.080111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.080306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.080332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.080544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.080571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.080807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.080835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.081058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.081084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.081296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.081336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.081547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.081576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.081788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.081813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.081987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.082012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 
00:34:47.839 [2024-07-14 09:44:32.082255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.082284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.839 [2024-07-14 09:44:32.082497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.839 [2024-07-14 09:44:32.082522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.839 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.082741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.082769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.082978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.083007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.083218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.083243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.083433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.083461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.083714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.083739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.083957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.083987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.084209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.084235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.084397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.084422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 
00:34:47.840 [2024-07-14 09:44:32.084635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.084660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.084903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.084933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.085139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.085167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.085384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.085409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.085617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.085645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.085872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.085901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.086138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.086164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.086351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.086379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.086574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.086599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.086820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.086845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 
00:34:47.840 [2024-07-14 09:44:32.087046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.087072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.087296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.087324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.087533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.087558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.087750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.087774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.088014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.088042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.088256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.088281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.088536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.088564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.088750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.088777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.088990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.089016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.089253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.089281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 
00:34:47.840 [2024-07-14 09:44:32.089483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.089511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.089751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.089776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.089984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.090012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.090244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.090272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.090509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.090538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.090734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.090762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.090976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.091005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.091196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.091222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.091436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.091465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.091674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.091702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 
00:34:47.840 [2024-07-14 09:44:32.091915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.091940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.092102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.092127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.092319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.092345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.092551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.092576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.092740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.092765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.093006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.093035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.093218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.093244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.093456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.093484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.093681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.093706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.093896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.093921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 
00:34:47.840 [2024-07-14 09:44:32.094103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.094132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.094330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.094359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.094549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.094574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.094765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.094791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.095029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.095058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.095301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.095327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.095544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.095572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.095782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.095810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.095996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.096021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.096216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.096241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 
00:34:47.840 [2024-07-14 09:44:32.096486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.096513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.096729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.096754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.096946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.096972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.097195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.097223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.097461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.097486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.097695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.097723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.097947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.097976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.098149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.098175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.098386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.098414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.098619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.098646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 
00:34:47.840 [2024-07-14 09:44:32.098964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.098991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.099235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.099264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.099473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.099501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.099735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.099760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.099978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.100007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.100203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.100232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.100421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.100446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.100661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.100688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.100922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.100948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.101137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.101161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 
00:34:47.840 [2024-07-14 09:44:32.101408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.101433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.840 qpair failed and we were unable to recover it. 00:34:47.840 [2024-07-14 09:44:32.101653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.840 [2024-07-14 09:44:32.101678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.101932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.101958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.102173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.102202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.102384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.102412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.102624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.102649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.102889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.102919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.103133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.103158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.103373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.103399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.103586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.103614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 
00:34:47.841 [2024-07-14 09:44:32.103844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.103879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.104096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.104121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.104331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.104359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.104577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.104605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.104826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.104851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.105072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.105100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.105337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.105365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.105548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.105573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.105786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.105813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.106022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.106050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 
00:34:47.841 [2024-07-14 09:44:32.106283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.106307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.106510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.106535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.106751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.106784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.107032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.107058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.107276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.107304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.107536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.107564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.107773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.107799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.108021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.108049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.108277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.108305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.108513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.108538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 
00:34:47.841 [2024-07-14 09:44:32.108742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.108770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.108954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.108984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.109166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.109191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.109404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.109432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.109633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.109660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.109899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.109925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.110115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.110144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.110395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.110421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.110584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.110609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.110764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.110789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 
00:34:47.841 [2024-07-14 09:44:32.111013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.111042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.111254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.111280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.111469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.111496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.111699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.111726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.111941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.111966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.112183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.112211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.112440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.112468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.112671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.112696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.112911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.112940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.113119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.113148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 
00:34:47.841 [2024-07-14 09:44:32.113340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.113367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.113574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.113601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.113839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.113875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.114060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.114085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.114322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.114349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.114566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.114590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.114761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.114786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.115028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.115057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.115289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.115314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.115501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.115526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 
00:34:47.841 [2024-07-14 09:44:32.115719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.115744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.115944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.115972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.116184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.116210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.116436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.116468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.116684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.116709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.116898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.116923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.117137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.117164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.117413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.117438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.117661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.117686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.117909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.117938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 
00:34:47.841 [2024-07-14 09:44:32.118116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.118143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.118368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.118394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.118607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.118635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.118842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.118877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.119074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.119100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.119255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.119279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.119492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.119520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.119759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.119785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.120030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.120059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.120269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.120299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 
00:34:47.841 [2024-07-14 09:44:32.120485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.120509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.120706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.120731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.120934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.120963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.841 [2024-07-14 09:44:32.121148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.841 [2024-07-14 09:44:32.121174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.841 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.121342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.121367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.121582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.121607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.121796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.121820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.122014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.122039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.122231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.122255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.122474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.122498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 
00:34:47.842 [2024-07-14 09:44:32.122678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.122714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.122903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.122929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.123086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.123109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.123272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.123295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.123476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.123501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.123690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.123715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.123888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.123923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.124118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.124143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.124353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.124378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.124588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.124614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 
00:34:47.842 [2024-07-14 09:44:32.124806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.124831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.125023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.125049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.125235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.125260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.125425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.125450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.125634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.125679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.125898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.125928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.126119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.126148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.126366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.126394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.126593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.126642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.126826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.126852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 
00:34:47.842 [2024-07-14 09:44:32.127056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.127083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.127248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.127274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.127466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.127491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.127823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.127880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.128064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.128090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.128257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.128284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.128550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.128601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.128802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.128830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.129051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.129077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.129270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.129296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 
00:34:47.842 [2024-07-14 09:44:32.129481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.129509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.129747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.129774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.129989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.130016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.130180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.130220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.130432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.130459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.130636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.130663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.130886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.130929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.131091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.131116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.131300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.131326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.131541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.131565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 
00:34:47.842 [2024-07-14 09:44:32.131726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.131750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.131956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.131986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.132153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.132178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.132339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.132364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.132708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.132759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.132974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.132999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.133158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.133183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.133453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.133505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.133706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.133734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.133957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.133983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 
00:34:47.842 [2024-07-14 09:44:32.134171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.134196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.134357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.134382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.134598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.134624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.134831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.134859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.135053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.135078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.135270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.135295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.135658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.135707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.135928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.135953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.136157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.136185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.136453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.136499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 
00:34:47.842 [2024-07-14 09:44:32.136711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.136739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.136956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.136981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.137147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.137172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.137341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.137366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.137558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.137582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.137778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.137807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.138047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.138074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.138264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.138289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.138539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.138588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 00:34:47.842 [2024-07-14 09:44:32.138783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.842 [2024-07-14 09:44:32.138811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.842 qpair failed and we were unable to recover it. 
00:34:47.843 [2024-07-14 09:44:32.139030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.139056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.139238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.139263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.139456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.139481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.139670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.139700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.139939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.139964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.140124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.140148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.140308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.140333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.140601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.140652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.140862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.140912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.141084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.141109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 
00:34:47.843 [2024-07-14 09:44:32.141295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.141320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.141530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.141556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.141750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.141775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.141966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.141991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.142160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.142203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.142409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.142438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.142838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.142899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.143077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.143102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.143290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.143315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.143530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.143571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 
00:34:47.843 [2024-07-14 09:44:32.143759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.143787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.144004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.144029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.144221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.144246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.144414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.144455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.144659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.144687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.144895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.144920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.145137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.145162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.145352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.145377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.145568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.145595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.145778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.145806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 
00:34:47.843 [2024-07-14 09:44:32.146019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.146045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.146237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.146262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.146478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.146503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.146673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.146698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.146888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.146914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.147127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.147153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.147402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.147427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.147591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.147616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.147834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.147860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.148062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.148095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 
00:34:47.843 [2024-07-14 09:44:32.148275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.148299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.148495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.148520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.148679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.148704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.148878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.148904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.149127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.149152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.149320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.149345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.149537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.149561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.149750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.149775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.149964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.149990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.150179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.150205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 
00:34:47.843 [2024-07-14 09:44:32.150392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.150417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.150608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.150632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.150822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.150847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.151022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.151048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.151236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.151261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.151419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.151443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.151632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.151656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.151821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.151846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.152115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.152163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.152393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.152421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 
00:34:47.843 [2024-07-14 09:44:32.152656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.152711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.152947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.152983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.153189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.153218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.153440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.153486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.153673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.153717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.153885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.153913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.154110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.154143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.154340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.154393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.154605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.154648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.154884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.154914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 
00:34:47.843 [2024-07-14 09:44:32.155120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.155149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.155370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.155415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.155602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.155659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.155888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.155916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.156146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.156174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.156364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.156408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.156679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.156728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.156946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.156974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.157222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.157275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.157553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.157603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 
00:34:47.843 [2024-07-14 09:44:32.157802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.157829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.158035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.843 [2024-07-14 09:44:32.158081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:47.843 qpair failed and we were unable to recover it. 00:34:47.843 [2024-07-14 09:44:32.158321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.158364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.158608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.158638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.158818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.158846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.159119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.159147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.159362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.159390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.159627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.159655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.159861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.159894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.160087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.160112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 
00:34:47.844 [2024-07-14 09:44:32.160298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.160326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.160506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.160535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.160740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.160769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.160958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.160988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.161177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.161203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.161367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.161392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.161574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.161602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.161814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.161838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.162062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.162087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.162299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.162327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 
00:34:47.844 [2024-07-14 09:44:32.162611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.162661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.162909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.162935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.163154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.163180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.163370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.163395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.163582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.163606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.163804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.163829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.164006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.164032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.164249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.164275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.164440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.164466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.164725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.164773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 
00:34:47.844 [2024-07-14 09:44:32.164967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.164993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.165185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.165210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.165376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.165400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.165583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.165611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.165799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.165824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.165996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.166022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.166210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.166235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.166389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.166413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.166667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.166715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.166937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.166963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 
00:34:47.844 [2024-07-14 09:44:32.167151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.167180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.167423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.167469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.167668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.167696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.167883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.167912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.168115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.168141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.168322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.168350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.168554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.168582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.168812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.168840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.169025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.169051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.169219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.169244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 
00:34:47.844 [2024-07-14 09:44:32.169481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.169528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.169759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.169786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.169976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.170002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.170214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.170239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.170427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.170468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.170684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.170731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.170944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.170970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.171152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.171177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.171341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.171366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.171556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.171584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 
00:34:47.844 [2024-07-14 09:44:32.171811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.171839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.172029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.172055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.172223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.172248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.172459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.172487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.172722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.172751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.172951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.172977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.173161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.173186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.173346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.173386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.173742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.173789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.174008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.174034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 
00:34:47.844 [2024-07-14 09:44:32.174198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.174223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.174499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.174546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.174756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.174784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.175005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.175031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.175218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.175243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.175420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.175445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.175656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.175681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.175875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.175902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.844 [2024-07-14 09:44:32.176090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.844 [2024-07-14 09:44:32.176116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.844 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.176277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.176302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 
00:34:47.845 [2024-07-14 09:44:32.176462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.176487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.176649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.176680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.176933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.176961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.177151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.177176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.177372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.177400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.177601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.177626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.177834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.177862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.178057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.178082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.178267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.178292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.178480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.178505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 
00:34:47.845 [2024-07-14 09:44:32.178672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.178698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.178862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.178896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.179112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.179138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.179294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.179319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.179536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.179561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.179754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.179779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.179962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.179988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.180155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.180181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.180372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.180397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.180584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.180610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 
00:34:47.845 [2024-07-14 09:44:32.180770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.180795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.180963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.180989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.181154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.181180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.181349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.181374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.181534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.181559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.181746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.181772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.181976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.182002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.182165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.182192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.182357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.182386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.182599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.182624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 
00:34:47.845 [2024-07-14 09:44:32.182836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.182861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.183095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.183121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.183328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.183353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.183540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.183565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.183724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.183749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.183936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.183962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.184153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.184178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.184392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.184417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.184630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.184656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.184842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.184872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 
00:34:47.845 [2024-07-14 09:44:32.185063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.185088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.185284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.185310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.185475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.185502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.185715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.185741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.185895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.185922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.186088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.186114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.186278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.186303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.186516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.186541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.186702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.186727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.186917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.186943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 
00:34:47.845 [2024-07-14 09:44:32.187101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.187126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.187284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.187309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.187501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.187526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.187687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.187712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.187876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.187902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.188085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.188111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.188336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.188364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.188571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.188596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.188806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.188832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.189000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.189026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 
00:34:47.845 [2024-07-14 09:44:32.189220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.189245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.189397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.189422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.189604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.189629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.189820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.189845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.190038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.190063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.190250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.190275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.190429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.190455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.190668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.190693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.190853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.190896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.191089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.191118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 
00:34:47.845 [2024-07-14 09:44:32.191359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.191388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.191570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.191598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.191805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.191830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.192028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.192054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.192246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.192272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.192485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.192510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.192700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.192725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.192913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.192940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.193108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.193133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.193324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.193350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 
00:34:47.845 [2024-07-14 09:44:32.193549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.193574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.193791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.193816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.193978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.194005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.194174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.194199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.194382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.194408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.194571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.194598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.194824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.845 [2024-07-14 09:44:32.194850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.845 qpair failed and we were unable to recover it. 00:34:47.845 [2024-07-14 09:44:32.195033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.195059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.195249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.195274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.195462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.195487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 
00:34:47.846 [2024-07-14 09:44:32.195676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.195701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.195920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.195946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.196115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.196142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.196319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.196343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.196545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.196570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.196756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.196781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.196964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.196995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.197184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.197209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.197393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.197418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.197606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.197631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 
00:34:47.846 [2024-07-14 09:44:32.197847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.197882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.198041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.198066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.198239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.198265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.198455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.198480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.198639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.198665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.198850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.198892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.199082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.199107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.199261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.199287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.199472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.199497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.199689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.199714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 
00:34:47.846 [2024-07-14 09:44:32.199929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.199955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.200172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.200198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.200361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.200386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.200567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.200592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.200760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.200785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.200966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.200992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.201185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.201211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.201372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.201397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.201578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.201604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.201757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.201782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 
00:34:47.846 [2024-07-14 09:44:32.201941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.201967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.202151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.202176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.202368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.202394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.202558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.202584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.202805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.202830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.203023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.203048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.203211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.203236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.203395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.203419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.203605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.203630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.203785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.203810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 
00:34:47.846 [2024-07-14 09:44:32.203994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.204020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.204211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.204235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.204393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.204418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.204617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.204642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.204834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.204858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.205036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.205062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.205217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.205242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.205436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.205465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.205623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.205648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.205837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.205862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 
00:34:47.846 [2024-07-14 09:44:32.206055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.206081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.206234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.206259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.206423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.206448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.206641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.206666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.206852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.206895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.207112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.207137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.207327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.207352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.207506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.207531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.207743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.207769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.207970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.207998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 
00:34:47.846 [2024-07-14 09:44:32.208231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.208256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.208445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.208472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.208696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.208723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.208931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.208957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.209169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.209196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.209380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.209408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.209631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.209655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.209878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.209907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.210151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.210179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.210408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.210433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 
00:34:47.846 [2024-07-14 09:44:32.210588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.210612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.210779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.210804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.211018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.211044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.846 qpair failed and we were unable to recover it. 00:34:47.846 [2024-07-14 09:44:32.211300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.846 [2024-07-14 09:44:32.211328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.211511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.211539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.211738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.211763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.211948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.211976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.212144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.212170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.212387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.212413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.212588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.212616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 
00:34:47.847 [2024-07-14 09:44:32.212823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.212852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.213052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.213077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.213287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.213312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.213563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.213590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.213804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.213829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.214099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.214125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.214317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.214343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.214533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.214558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.214737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.214766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.214964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.214990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 
00:34:47.847 [2024-07-14 09:44:32.215150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.215176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.215365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.215391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.215549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.215576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.215756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.215782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.215970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.215999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.216204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.216233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.216441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.216467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.216696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.216724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.216900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.216928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.217132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.217158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 
00:34:47.847 [2024-07-14 09:44:32.217342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.217369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.217546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.217573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.217766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.217791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.217978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.218005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.218229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.218257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.218486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.218511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.218748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.218791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.219027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.219056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.219247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.219273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.219493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.219533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 
00:34:47.847 [2024-07-14 09:44:32.219730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.219772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.219986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.220018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.220283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.220310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.220509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.220537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.220721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.220748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.220923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.220959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.221216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.221254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.221457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.221485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.221674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.221709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.221952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.221993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 
00:34:47.847 [2024-07-14 09:44:32.222195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.222230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.222425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.222459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.222660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.222688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.222878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.222905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.223096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.223126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.223352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.223379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.223547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.223574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.223788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.223831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.224061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.224091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.224311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.224338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 
00:34:47.847 [2024-07-14 09:44:32.224572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.224601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.224812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.224840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.225046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.225073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.225285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.225318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.225533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.225568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.225762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.225788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.225961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.225988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.226203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.226245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.226479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.226511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.226734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.226771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 
00:34:47.847 [2024-07-14 09:44:32.226973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.226999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.227212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.227238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.227429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.227458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.227673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.227703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.227888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.227923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.228114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.228140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.228318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.228345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.228583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.228607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.228821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.228849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.229061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.229091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 
00:34:47.847 [2024-07-14 09:44:32.229283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.229308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.229555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.229582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.229768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.229795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.229990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.230016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.230251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.230279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.230512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.847 [2024-07-14 09:44:32.230540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.847 qpair failed and we were unable to recover it. 00:34:47.847 [2024-07-14 09:44:32.230754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.230779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.230989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.231019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.231271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.231297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.231492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.231518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 
00:34:47.848 [2024-07-14 09:44:32.231733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.231761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.231977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.232003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.232169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.232196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.232386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.232411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.232576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.232601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.232752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.232777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.232939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.232964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.233184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.233209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.233401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.233426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.233642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.233683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 
00:34:47.848 [2024-07-14 09:44:32.233915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.233943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.234158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.234183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.234350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.234375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.234562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.234590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.234824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.234849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.235117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.235146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.235335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.235360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.235544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.235569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.235814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.235842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.236061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.236089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 
00:34:47.848 [2024-07-14 09:44:32.236277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.236302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.236514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.236542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.236755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.236782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.236959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.236990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.237170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.237213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.237447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.237475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.237716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.237741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.237932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.237961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.238150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.238178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.238379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.238404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 
00:34:47.848 [2024-07-14 09:44:32.238640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.238666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.238832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.238859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.239060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.239085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.239296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.239324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.239569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.239596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.239810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.239835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.240037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.240063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.240226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.240251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.240439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.240464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.240681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.240705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 
00:34:47.848 [2024-07-14 09:44:32.240915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.240943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.241145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.241170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.241351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.241376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.241615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.241642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.241852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.241885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.242096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.242124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.242358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.242383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.242607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.242632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.242852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.242898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.243093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.243120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 
00:34:47.848 [2024-07-14 09:44:32.243359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.243385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.243608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.243633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.243844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.243880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.244093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.244118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.244332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.244360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.244581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.244608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.244797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.244822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.245018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.245044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.245230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.245258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.245435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.245460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 
00:34:47.848 [2024-07-14 09:44:32.245711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.245739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.245979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.246005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.246166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.246192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.246450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.246478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.246673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.246701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.246891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.246917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.247098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.247127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.247367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.247395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.247616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.848 [2024-07-14 09:44:32.247641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.848 qpair failed and we were unable to recover it. 00:34:47.848 [2024-07-14 09:44:32.247852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.247887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 
00:34:47.849 [2024-07-14 09:44:32.248102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.248130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.248333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.248358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.248541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.248568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.248740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.248767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.248971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.248997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.249219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.249246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.249462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.249490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.249701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.249726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.249940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.249969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.250189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.250214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 
00:34:47.849 [2024-07-14 09:44:32.250376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.250402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.250638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.250665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.250889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.250917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.251108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.251135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.251372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.251400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.251619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.251647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.251882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.251908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.252073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.252098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.252289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.252314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.252575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.252600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 
00:34:47.849 [2024-07-14 09:44:32.252877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.252905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.253115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.253144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.253333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.253359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.253567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.253594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.253802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.253829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.254084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.254110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.254340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.254369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.254606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.254634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.254881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.254907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.255157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.255184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 
00:34:47.849 [2024-07-14 09:44:32.255403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.255428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.255620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.255645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.255822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.255850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.256074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.256100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.256265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.256291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.256485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.256510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.256734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.256762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.256970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.256997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.257188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.257214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.257444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.257472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 
00:34:47.849 [2024-07-14 09:44:32.257708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.257733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.257949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.257977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.258185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.258213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.258453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.258478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.258721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.258749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.258931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.258960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.259185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.259211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.259434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.259461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.259659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.259684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.259875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.259901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 
00:34:47.849 [2024-07-14 09:44:32.260117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.260146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.260356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.260384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.260572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.260597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.260789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.260814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.261075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.261104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.261307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.261333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.261555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.261582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.261822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.261850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.262071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.262096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.262287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.262312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 
00:34:47.849 [2024-07-14 09:44:32.262548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.262577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.262794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.262819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.262983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.263013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.263186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.263211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.263368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.263394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.263634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.263662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.263893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.263925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.264160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.264187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.264395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.264424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.264659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.264695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 
00:34:47.849 [2024-07-14 09:44:32.264953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.264981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.265217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.265252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.265521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.265559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.265759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.265794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.266011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.266044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.266264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.266295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.266480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.266506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.266745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.266772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.266954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.266983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.267193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.267219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 
00:34:47.849 [2024-07-14 09:44:32.267400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.267426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.267652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.267680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.849 qpair failed and we were unable to recover it. 00:34:47.849 [2024-07-14 09:44:32.267893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.849 [2024-07-14 09:44:32.267925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.850 qpair failed and we were unable to recover it. 00:34:47.850 [2024-07-14 09:44:32.268120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.850 [2024-07-14 09:44:32.268146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.850 qpair failed and we were unable to recover it. 00:34:47.850 [2024-07-14 09:44:32.268339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.850 [2024-07-14 09:44:32.268367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.850 qpair failed and we were unable to recover it. 00:34:47.850 [2024-07-14 09:44:32.268610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.850 [2024-07-14 09:44:32.268635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.850 qpair failed and we were unable to recover it. 00:34:47.850 [2024-07-14 09:44:32.268846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.850 [2024-07-14 09:44:32.268883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.850 qpair failed and we were unable to recover it. 00:34:47.850 [2024-07-14 09:44:32.269080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.850 [2024-07-14 09:44:32.269109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.850 qpair failed and we were unable to recover it. 00:34:47.850 [2024-07-14 09:44:32.269320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.850 [2024-07-14 09:44:32.269346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.850 qpair failed and we were unable to recover it. 00:34:47.850 [2024-07-14 09:44:32.269524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.850 [2024-07-14 09:44:32.269558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.850 qpair failed and we were unable to recover it. 
00:34:47.850 [2024-07-14 09:44:32.269810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.850 [2024-07-14 09:44:32.269838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.850 qpair failed and we were unable to recover it. 00:34:47.850 [2024-07-14 09:44:32.270044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.850 [2024-07-14 09:44:32.270070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.850 qpair failed and we were unable to recover it. 00:34:47.850 [2024-07-14 09:44:32.270312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.850 [2024-07-14 09:44:32.270340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.850 qpair failed and we were unable to recover it. 00:34:47.850 [2024-07-14 09:44:32.270554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.850 [2024-07-14 09:44:32.270582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.850 qpair failed and we were unable to recover it. 00:34:47.850 [2024-07-14 09:44:32.270768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.850 [2024-07-14 09:44:32.270793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.850 qpair failed and we were unable to recover it. 00:34:47.850 [2024-07-14 09:44:32.271023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.850 [2024-07-14 09:44:32.271052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.850 qpair failed and we were unable to recover it. 00:34:47.850 [2024-07-14 09:44:32.271253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.850 [2024-07-14 09:44:32.271280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.850 qpair failed and we were unable to recover it. 00:34:47.850 [2024-07-14 09:44:32.271482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.850 [2024-07-14 09:44:32.271507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.850 qpair failed and we were unable to recover it. 00:34:47.850 [2024-07-14 09:44:32.271722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.850 [2024-07-14 09:44:32.271750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.850 qpair failed and we were unable to recover it. 00:34:47.850 [2024-07-14 09:44:32.271999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.850 [2024-07-14 09:44:32.272029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.850 qpair failed and we were unable to recover it. 
00:34:47.850 [2024-07-14 09:44:32.272249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.850 [2024-07-14 09:44:32.272286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.850 qpair failed and we were unable to recover it. 00:34:47.850 [2024-07-14 09:44:32.272506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.850 [2024-07-14 09:44:32.272533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.850 qpair failed and we were unable to recover it. 00:34:47.850 [2024-07-14 09:44:32.272701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.850 [2024-07-14 09:44:32.272726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.850 qpair failed and we were unable to recover it. 00:34:47.850 [2024-07-14 09:44:32.272926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.850 [2024-07-14 09:44:32.272953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.850 qpair failed and we were unable to recover it. 00:34:47.850 [2024-07-14 09:44:32.273175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.850 [2024-07-14 09:44:32.273203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.850 qpair failed and we were unable to recover it. 00:34:47.850 [2024-07-14 09:44:32.273427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.850 [2024-07-14 09:44:32.273469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.850 qpair failed and we were unable to recover it. 00:34:47.850 [2024-07-14 09:44:32.273700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.850 [2024-07-14 09:44:32.273726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.850 qpair failed and we were unable to recover it. 00:34:47.850 [2024-07-14 09:44:32.273896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.850 [2024-07-14 09:44:32.273923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.850 qpair failed and we were unable to recover it. 00:34:47.850 [2024-07-14 09:44:32.274137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.850 [2024-07-14 09:44:32.274178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.850 qpair failed and we were unable to recover it. 00:34:47.850 [2024-07-14 09:44:32.274381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.850 [2024-07-14 09:44:32.274407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.850 qpair failed and we were unable to recover it. 
00:34:47.850 [2024-07-14 09:44:32.274598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.850 [2024-07-14 09:44:32.274638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.850 qpair failed and we were unable to recover it. 00:34:47.850 [2024-07-14 09:44:32.274860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.850 [2024-07-14 09:44:32.274897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:47.850 qpair failed and we were unable to recover it. 00:34:48.117 [2024-07-14 09:44:32.275093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.117 [2024-07-14 09:44:32.275118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.117 qpair failed and we were unable to recover it. 00:34:48.117 [2024-07-14 09:44:32.275309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.117 [2024-07-14 09:44:32.275334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.117 qpair failed and we were unable to recover it. 00:34:48.117 [2024-07-14 09:44:32.275505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.117 [2024-07-14 09:44:32.275530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.117 qpair failed and we were unable to recover it. 00:34:48.117 [2024-07-14 09:44:32.275685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.117 [2024-07-14 09:44:32.275710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.117 qpair failed and we were unable to recover it. 00:34:48.117 [2024-07-14 09:44:32.275901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.117 [2024-07-14 09:44:32.275927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.117 qpair failed and we were unable to recover it. 00:34:48.117 [2024-07-14 09:44:32.276136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.117 [2024-07-14 09:44:32.276161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.117 qpair failed and we were unable to recover it. 00:34:48.117 [2024-07-14 09:44:32.276353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.117 [2024-07-14 09:44:32.276379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.117 qpair failed and we were unable to recover it. 00:34:48.117 [2024-07-14 09:44:32.276686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.117 [2024-07-14 09:44:32.276724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.117 qpair failed and we were unable to recover it. 
00:34:48.117 [2024-07-14 09:44:32.276920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.117 [2024-07-14 09:44:32.276947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.117 qpair failed and we were unable to recover it. 00:34:48.117 [2024-07-14 09:44:32.277175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.117 [2024-07-14 09:44:32.277202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.117 qpair failed and we were unable to recover it. 00:34:48.117 [2024-07-14 09:44:32.277451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.117 [2024-07-14 09:44:32.277486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.117 qpair failed and we were unable to recover it. 00:34:48.117 [2024-07-14 09:44:32.277734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.117 [2024-07-14 09:44:32.277763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.117 qpair failed and we were unable to recover it. 00:34:48.117 [2024-07-14 09:44:32.277951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.117 [2024-07-14 09:44:32.277987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.117 qpair failed and we were unable to recover it. 00:34:48.117 [2024-07-14 09:44:32.278205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.117 [2024-07-14 09:44:32.278234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.117 qpair failed and we were unable to recover it. 00:34:48.117 [2024-07-14 09:44:32.278440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.117 [2024-07-14 09:44:32.278469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.117 qpair failed and we were unable to recover it. 00:34:48.117 [2024-07-14 09:44:32.278676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.117 [2024-07-14 09:44:32.278703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.117 qpair failed and we were unable to recover it. 00:34:48.117 [2024-07-14 09:44:32.278900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.117 [2024-07-14 09:44:32.278944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.117 qpair failed and we were unable to recover it. 00:34:48.117 [2024-07-14 09:44:32.279174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.117 [2024-07-14 09:44:32.279200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.117 qpair failed and we were unable to recover it. 
00:34:48.117 [2024-07-14 09:44:32.279418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.117 [2024-07-14 09:44:32.279461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.117 qpair failed and we were unable to recover it. 00:34:48.117 [2024-07-14 09:44:32.279678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.117 [2024-07-14 09:44:32.279706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.117 qpair failed and we were unable to recover it. 00:34:48.117 [2024-07-14 09:44:32.279896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.117 [2024-07-14 09:44:32.279926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.117 qpair failed and we were unable to recover it. 00:34:48.117 [2024-07-14 09:44:32.280142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.117 [2024-07-14 09:44:32.280167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.117 qpair failed and we were unable to recover it. 00:34:48.117 [2024-07-14 09:44:32.280379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.117 [2024-07-14 09:44:32.280420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.117 qpair failed and we were unable to recover it. 00:34:48.117 [2024-07-14 09:44:32.280637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.117 [2024-07-14 09:44:32.280664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.117 qpair failed and we were unable to recover it. 00:34:48.117 [2024-07-14 09:44:32.280877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.117 [2024-07-14 09:44:32.280903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.117 qpair failed and we were unable to recover it. 00:34:48.117 [2024-07-14 09:44:32.281134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.117 [2024-07-14 09:44:32.281163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.117 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.281356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.281381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.281547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.281572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 
00:34:48.118 [2024-07-14 09:44:32.281756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.281784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.282027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.282055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.282277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.282302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.282519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.282549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.282767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.282795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.283009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.283036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.283273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.283302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.283491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.283519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.283709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.283734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.283924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.283951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 
00:34:48.118 [2024-07-14 09:44:32.284170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.284197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.284403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.284427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.284685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.284713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.284933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.284959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.285159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.285184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.285401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.285429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.285634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.285662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.285879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.285909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.286129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.286171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.286381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.286409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 
00:34:48.118 [2024-07-14 09:44:32.286623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.286649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.286895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.286922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.287089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.287115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.287311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.287336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.287581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.287610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.287877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.287905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.288143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.288168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.288416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.288444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.288656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.288684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.288876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.288902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 
00:34:48.118 [2024-07-14 09:44:32.289068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.289094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.289305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.289331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.289516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.289541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.289783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.289811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.290058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.290087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.290337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.290363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.290619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.290648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.290880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.290909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.291090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.291115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.291317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.291345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 
00:34:48.118 [2024-07-14 09:44:32.291566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.291594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.291803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.291828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.292033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.292059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.292273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.292301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.292517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.292542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.292784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.292812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.293006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.293035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.293247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.293273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.293474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.293500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.293668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.293695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 
00:34:48.118 [2024-07-14 09:44:32.293899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.293926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.294126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.294154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.294348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.294373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.294585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.294610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.294844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.294887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.295128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.295157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.295392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.295418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.295591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.295615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.295827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.295858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.296087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.296113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 
00:34:48.118 [2024-07-14 09:44:32.296319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.296346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.296536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.296564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.296779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.296804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.296979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.297008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.297229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.297256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.297440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.297465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.297670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.297698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.297942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.297971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.298202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.298227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.118 qpair failed and we were unable to recover it. 00:34:48.118 [2024-07-14 09:44:32.298449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.118 [2024-07-14 09:44:32.298475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.119 qpair failed and we were unable to recover it. 
00:34:48.119 [2024-07-14 09:44:32.298650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.119 [2024-07-14 09:44:32.298678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.119 qpair failed and we were unable to recover it. 00:34:48.119 [2024-07-14 09:44:32.298876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.119 [2024-07-14 09:44:32.298903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.119 qpair failed and we were unable to recover it. 00:34:48.119 [2024-07-14 09:44:32.299102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.119 [2024-07-14 09:44:32.299127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.119 qpair failed and we were unable to recover it. 00:34:48.119 [2024-07-14 09:44:32.299303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.119 [2024-07-14 09:44:32.299331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.119 qpair failed and we were unable to recover it. 00:34:48.119 [2024-07-14 09:44:32.299545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.119 [2024-07-14 09:44:32.299570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.119 qpair failed and we were unable to recover it. 00:34:48.119 [2024-07-14 09:44:32.299761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.119 [2024-07-14 09:44:32.299786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.119 qpair failed and we were unable to recover it. 00:34:48.119 [2024-07-14 09:44:32.300005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.119 [2024-07-14 09:44:32.300033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.119 qpair failed and we were unable to recover it. 00:34:48.119 [2024-07-14 09:44:32.300257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.119 [2024-07-14 09:44:32.300283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.119 qpair failed and we were unable to recover it. 00:34:48.119 [2024-07-14 09:44:32.300493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.119 [2024-07-14 09:44:32.300520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.119 qpair failed and we were unable to recover it. 00:34:48.119 [2024-07-14 09:44:32.300766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.119 [2024-07-14 09:44:32.300794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.119 qpair failed and we were unable to recover it. 
00:34:48.119 [2024-07-14 09:44:32.301034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.119 [2024-07-14 09:44:32.301060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.119 qpair failed and we were unable to recover it. 00:34:48.119 [2024-07-14 09:44:32.301268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.119 [2024-07-14 09:44:32.301295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.119 qpair failed and we were unable to recover it. 00:34:48.119 [2024-07-14 09:44:32.301506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.119 [2024-07-14 09:44:32.301531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.119 qpair failed and we were unable to recover it. 00:34:48.119 [2024-07-14 09:44:32.301717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.119 [2024-07-14 09:44:32.301742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.119 qpair failed and we were unable to recover it. 00:34:48.119 [2024-07-14 09:44:32.301934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.119 [2024-07-14 09:44:32.301961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.119 qpair failed and we were unable to recover it. 00:34:48.119 [2024-07-14 09:44:32.302193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.119 [2024-07-14 09:44:32.302220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.119 qpair failed and we were unable to recover it. 00:34:48.119 [2024-07-14 09:44:32.302402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.119 [2024-07-14 09:44:32.302427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.119 qpair failed and we were unable to recover it. 00:34:48.119 [2024-07-14 09:44:32.302608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.119 [2024-07-14 09:44:32.302636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.119 qpair failed and we were unable to recover it. 00:34:48.119 [2024-07-14 09:44:32.302893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.119 [2024-07-14 09:44:32.302923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.119 qpair failed and we were unable to recover it. 00:34:48.119 [2024-07-14 09:44:32.303140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.119 [2024-07-14 09:44:32.303165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.119 qpair failed and we were unable to recover it. 
00:34:48.119 [2024-07-14 09:44:32.303378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.119 [2024-07-14 09:44:32.303420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.119 qpair failed and we were unable to recover it. 00:34:48.119 [2024-07-14 09:44:32.303666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.119 [2024-07-14 09:44:32.303695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.119 qpair failed and we were unable to recover it. 00:34:48.119 [2024-07-14 09:44:32.303879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.119 [2024-07-14 09:44:32.303904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.119 qpair failed and we were unable to recover it. 00:34:48.119 [2024-07-14 09:44:32.304145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.119 [2024-07-14 09:44:32.304180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.119 qpair failed and we were unable to recover it. 00:34:48.119 [2024-07-14 09:44:32.304362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.119 [2024-07-14 09:44:32.304417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.119 qpair failed and we were unable to recover it. 00:34:48.119 [2024-07-14 09:44:32.304658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.119 [2024-07-14 09:44:32.304684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.119 qpair failed and we were unable to recover it. 00:34:48.119 [2024-07-14 09:44:32.304860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.119 [2024-07-14 09:44:32.304891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.119 qpair failed and we were unable to recover it. 00:34:48.119 [2024-07-14 09:44:32.305095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.119 [2024-07-14 09:44:32.305124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.119 qpair failed and we were unable to recover it. 00:34:48.119 [2024-07-14 09:44:32.305333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.119 [2024-07-14 09:44:32.305359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.119 qpair failed and we were unable to recover it. 00:34:48.119 [2024-07-14 09:44:32.305527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.119 [2024-07-14 09:44:32.305553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.119 qpair failed and we were unable to recover it. 
00:34:48.119 [2024-07-14 09:44:32.305792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.119 [2024-07-14 09:44:32.305820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.119 qpair failed and we were unable to recover it. 00:34:48.119 [2024-07-14 09:44:32.306013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.119 [2024-07-14 09:44:32.306039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.119 qpair failed and we were unable to recover it. 00:34:48.119 [2024-07-14 09:44:32.306204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.119 [2024-07-14 09:44:32.306229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.119 qpair failed and we were unable to recover it. 00:34:48.119 [2024-07-14 09:44:32.306466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.119 [2024-07-14 09:44:32.306492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.119 qpair failed and we were unable to recover it. 00:34:48.119 [2024-07-14 09:44:32.306677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.119 [2024-07-14 09:44:32.306702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.119 qpair failed and we were unable to recover it. 00:34:48.119 [2024-07-14 09:44:32.306940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.119 [2024-07-14 09:44:32.306969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.119 qpair failed and we were unable to recover it. 00:34:48.119 [2024-07-14 09:44:32.307148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.119 [2024-07-14 09:44:32.307176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.119 qpair failed and we were unable to recover it. 00:34:48.119 [2024-07-14 09:44:32.307414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.119 [2024-07-14 09:44:32.307439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.119 qpair failed and we were unable to recover it. 00:34:48.119 [2024-07-14 09:44:32.307628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.119 [2024-07-14 09:44:32.307656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.119 qpair failed and we were unable to recover it. 00:34:48.119 [2024-07-14 09:44:32.307845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.119 [2024-07-14 09:44:32.307880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.119 qpair failed and we were unable to recover it. 
00:34:48.119 [2024-07-14 09:44:32.308100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.119 [2024-07-14 09:44:32.308125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.119 qpair failed and we were unable to recover it. 00:34:48.119 [2024-07-14 09:44:32.308353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.119 [2024-07-14 09:44:32.308381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.119 qpair failed and we were unable to recover it. 00:34:48.119 [2024-07-14 09:44:32.308591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.119 [2024-07-14 09:44:32.308616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.119 qpair failed and we were unable to recover it. 00:34:48.119 [2024-07-14 09:44:32.308811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.119 [2024-07-14 09:44:32.308836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.119 qpair failed and we were unable to recover it. 00:34:48.119 [2024-07-14 09:44:32.309008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.119 [2024-07-14 09:44:32.309034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.119 qpair failed and we were unable to recover it. 00:34:48.119 [2024-07-14 09:44:32.309248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.119 [2024-07-14 09:44:32.309277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.119 qpair failed and we were unable to recover it. 00:34:48.119 [2024-07-14 09:44:32.309460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.119 [2024-07-14 09:44:32.309485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.119 qpair failed and we were unable to recover it. 00:34:48.119 [2024-07-14 09:44:32.309694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.119 [2024-07-14 09:44:32.309721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.119 qpair failed and we were unable to recover it. 00:34:48.119 [2024-07-14 09:44:32.309900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.119 [2024-07-14 09:44:32.309928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.119 qpair failed and we were unable to recover it. 00:34:48.119 [2024-07-14 09:44:32.310138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.119 [2024-07-14 09:44:32.310164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.119 qpair failed and we were unable to recover it. 
00:34:48.119 [2024-07-14 09:44:32.310403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:48.119 [2024-07-14 09:44:32.310432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420
00:34:48.119 qpair failed and we were unable to recover it.
[... the same three-line error (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously with advancing timestamps from 09:44:32.310 through 09:44:32.359 ...]
00:34:48.122 [2024-07-14 09:44:32.359838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.122 [2024-07-14 09:44:32.359876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.122 qpair failed and we were unable to recover it. 00:34:48.122 [2024-07-14 09:44:32.360062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.122 [2024-07-14 09:44:32.360091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.122 qpair failed and we were unable to recover it. 00:34:48.122 [2024-07-14 09:44:32.360303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.122 [2024-07-14 09:44:32.360328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.122 qpair failed and we were unable to recover it. 00:34:48.122 [2024-07-14 09:44:32.360513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.122 [2024-07-14 09:44:32.360541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.122 qpair failed and we were unable to recover it. 00:34:48.122 [2024-07-14 09:44:32.360712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.122 [2024-07-14 09:44:32.360740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.122 qpair failed and we were unable to recover it. 00:34:48.122 [2024-07-14 09:44:32.360979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.122 [2024-07-14 09:44:32.361006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.122 qpair failed and we were unable to recover it. 00:34:48.122 [2024-07-14 09:44:32.361206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.122 [2024-07-14 09:44:32.361231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.122 qpair failed and we were unable to recover it. 00:34:48.122 [2024-07-14 09:44:32.361387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.122 [2024-07-14 09:44:32.361412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.122 qpair failed and we were unable to recover it. 00:34:48.122 [2024-07-14 09:44:32.361626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.122 [2024-07-14 09:44:32.361651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.122 qpair failed and we were unable to recover it. 00:34:48.122 [2024-07-14 09:44:32.361900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.122 [2024-07-14 09:44:32.361928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.122 qpair failed and we were unable to recover it. 
00:34:48.122 [2024-07-14 09:44:32.362173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.122 [2024-07-14 09:44:32.362201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.122 qpair failed and we were unable to recover it. 00:34:48.122 [2024-07-14 09:44:32.362381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.122 [2024-07-14 09:44:32.362406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.122 qpair failed and we were unable to recover it. 00:34:48.122 [2024-07-14 09:44:32.362582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.122 [2024-07-14 09:44:32.362611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.122 qpair failed and we were unable to recover it. 00:34:48.122 [2024-07-14 09:44:32.362813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.122 [2024-07-14 09:44:32.362841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.122 qpair failed and we were unable to recover it. 00:34:48.122 [2024-07-14 09:44:32.363044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.122 [2024-07-14 09:44:32.363070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.122 qpair failed and we were unable to recover it. 00:34:48.122 [2024-07-14 09:44:32.363258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.122 [2024-07-14 09:44:32.363286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.122 qpair failed and we were unable to recover it. 00:34:48.122 [2024-07-14 09:44:32.363493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.122 [2024-07-14 09:44:32.363522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.122 qpair failed and we were unable to recover it. 00:34:48.122 [2024-07-14 09:44:32.363726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.122 [2024-07-14 09:44:32.363752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.122 qpair failed and we were unable to recover it. 00:34:48.122 [2024-07-14 09:44:32.363987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.122 [2024-07-14 09:44:32.364016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.122 qpair failed and we were unable to recover it. 00:34:48.122 [2024-07-14 09:44:32.364201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.122 [2024-07-14 09:44:32.364230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.122 qpair failed and we were unable to recover it. 
00:34:48.122 [2024-07-14 09:44:32.364440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.122 [2024-07-14 09:44:32.364466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.122 qpair failed and we were unable to recover it. 00:34:48.122 [2024-07-14 09:44:32.364635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.122 [2024-07-14 09:44:32.364660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.122 qpair failed and we were unable to recover it. 00:34:48.122 [2024-07-14 09:44:32.364827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.122 [2024-07-14 09:44:32.364852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.122 qpair failed and we were unable to recover it. 00:34:48.122 [2024-07-14 09:44:32.365071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.122 [2024-07-14 09:44:32.365096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.122 qpair failed and we were unable to recover it. 00:34:48.122 [2024-07-14 09:44:32.365298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.122 [2024-07-14 09:44:32.365323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.122 qpair failed and we were unable to recover it. 00:34:48.122 [2024-07-14 09:44:32.365535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.122 [2024-07-14 09:44:32.365563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.122 qpair failed and we were unable to recover it. 00:34:48.122 [2024-07-14 09:44:32.365798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.122 [2024-07-14 09:44:32.365823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.122 qpair failed and we were unable to recover it. 00:34:48.122 [2024-07-14 09:44:32.366022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.122 [2024-07-14 09:44:32.366048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.122 qpair failed and we were unable to recover it. 00:34:48.122 [2024-07-14 09:44:32.366247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.122 [2024-07-14 09:44:32.366273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.122 qpair failed and we were unable to recover it. 00:34:48.122 [2024-07-14 09:44:32.366503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.122 [2024-07-14 09:44:32.366528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.122 qpair failed and we were unable to recover it. 
00:34:48.122 [2024-07-14 09:44:32.366742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.122 [2024-07-14 09:44:32.366770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.122 qpair failed and we were unable to recover it. 00:34:48.122 [2024-07-14 09:44:32.367001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.122 [2024-07-14 09:44:32.367028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.122 qpair failed and we were unable to recover it. 00:34:48.122 [2024-07-14 09:44:32.367219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.122 [2024-07-14 09:44:32.367244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.122 qpair failed and we were unable to recover it. 00:34:48.122 [2024-07-14 09:44:32.367459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.122 [2024-07-14 09:44:32.367487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.122 qpair failed and we were unable to recover it. 00:34:48.122 [2024-07-14 09:44:32.367673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.122 [2024-07-14 09:44:32.367700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.122 qpair failed and we were unable to recover it. 00:34:48.122 [2024-07-14 09:44:32.367891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.122 [2024-07-14 09:44:32.367918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.122 qpair failed and we were unable to recover it. 00:34:48.122 [2024-07-14 09:44:32.368133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.122 [2024-07-14 09:44:32.368174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.122 qpair failed and we were unable to recover it. 00:34:48.122 [2024-07-14 09:44:32.368356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.122 [2024-07-14 09:44:32.368384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.122 qpair failed and we were unable to recover it. 00:34:48.122 [2024-07-14 09:44:32.368596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.122 [2024-07-14 09:44:32.368621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.122 qpair failed and we were unable to recover it. 00:34:48.122 [2024-07-14 09:44:32.368838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.122 [2024-07-14 09:44:32.368872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.122 qpair failed and we were unable to recover it. 
00:34:48.122 [2024-07-14 09:44:32.369086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.122 [2024-07-14 09:44:32.369114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.122 qpair failed and we were unable to recover it. 00:34:48.122 [2024-07-14 09:44:32.369333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.122 [2024-07-14 09:44:32.369362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.122 qpair failed and we were unable to recover it. 00:34:48.122 [2024-07-14 09:44:32.369607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.122 [2024-07-14 09:44:32.369635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.122 qpair failed and we were unable to recover it. 00:34:48.122 [2024-07-14 09:44:32.369845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.122 [2024-07-14 09:44:32.369875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.122 qpair failed and we were unable to recover it. 00:34:48.122 [2024-07-14 09:44:32.370070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.122 [2024-07-14 09:44:32.370095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.122 qpair failed and we were unable to recover it. 00:34:48.122 [2024-07-14 09:44:32.370284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.122 [2024-07-14 09:44:32.370312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.122 qpair failed and we were unable to recover it. 00:34:48.122 [2024-07-14 09:44:32.370487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.122 [2024-07-14 09:44:32.370514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.122 qpair failed and we were unable to recover it. 00:34:48.122 [2024-07-14 09:44:32.370710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.122 [2024-07-14 09:44:32.370735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.122 qpair failed and we were unable to recover it. 00:34:48.122 [2024-07-14 09:44:32.370920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.122 [2024-07-14 09:44:32.370946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.122 qpair failed and we were unable to recover it. 00:34:48.122 [2024-07-14 09:44:32.371135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.122 [2024-07-14 09:44:32.371161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.122 qpair failed and we were unable to recover it. 
00:34:48.122 [2024-07-14 09:44:32.371323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.122 [2024-07-14 09:44:32.371348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.122 qpair failed and we were unable to recover it. 00:34:48.122 [2024-07-14 09:44:32.371511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.122 [2024-07-14 09:44:32.371536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.122 qpair failed and we were unable to recover it. 00:34:48.122 [2024-07-14 09:44:32.371726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.122 [2024-07-14 09:44:32.371751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.122 qpair failed and we were unable to recover it. 00:34:48.122 [2024-07-14 09:44:32.371937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.122 [2024-07-14 09:44:32.371964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.122 qpair failed and we were unable to recover it. 00:34:48.122 [2024-07-14 09:44:32.372150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.372175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.372369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.372395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.372585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.372610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.372800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.372825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.372993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.373021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.373179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.373204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 
00:34:48.123 [2024-07-14 09:44:32.373398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.373423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.373586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.373611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.373796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.373821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.373986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.374012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.374178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.374205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.374391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.374416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.374606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.374631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.374818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.374843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.375048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.375074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.375267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.375292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 
00:34:48.123 [2024-07-14 09:44:32.375458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.375484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.375699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.375724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.375885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.375912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.376102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.376127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.376313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.376339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.376518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.376543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.376733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.376758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.376919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.376945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.377134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.377160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.377332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.377357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 
00:34:48.123 [2024-07-14 09:44:32.377569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.377594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.377777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.377803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.378021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.378047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.378206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.378232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.378454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.378479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.378641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.378666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.378848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.378888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.379078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.379103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.379295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.379321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.379508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.379533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 
00:34:48.123 [2024-07-14 09:44:32.379695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.379720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.379924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.379951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.380141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.380166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.380346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.380371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.380554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.380579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.380781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.380806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.381015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.381044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.381258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.381283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.381465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.381490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.381642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.381668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 
00:34:48.123 [2024-07-14 09:44:32.381883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.381909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.382073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.382098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.382282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.382308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.382490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.382515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.382730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.382755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.382910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.382936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.383102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.383127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.383289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.383315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.383500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.383525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.383694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.383723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 
00:34:48.123 [2024-07-14 09:44:32.383939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.383966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.384183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.384208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.384378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.384403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.384565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.384591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.384778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.384802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.384988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.385015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.385177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.385202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.385386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.385413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.385615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.385643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.385850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.385882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 
00:34:48.123 [2024-07-14 09:44:32.386073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.386098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.386319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.386347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.386560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.386585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.386753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.386779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.386990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.387015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.387204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.387229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.387448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.387473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.387664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.387689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.387882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.387908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 00:34:48.123 [2024-07-14 09:44:32.388122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.388147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it. 
00:34:48.123 [2024-07-14 09:44:32.388301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.123 [2024-07-14 09:44:32.388326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.123 qpair failed and we were unable to recover it.
[The same three-message error sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously from 09:44:32.388 through 09:44:32.434.]
00:34:48.126 [2024-07-14 09:44:32.434164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.126 [2024-07-14 09:44:32.434189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.126 qpair failed and we were unable to recover it.
00:34:48.126 [2024-07-14 09:44:32.434405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.126 [2024-07-14 09:44:32.434430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.126 qpair failed and we were unable to recover it. 00:34:48.126 [2024-07-14 09:44:32.434618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.126 [2024-07-14 09:44:32.434643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.126 qpair failed and we were unable to recover it. 00:34:48.126 [2024-07-14 09:44:32.434828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.126 [2024-07-14 09:44:32.434853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.126 qpair failed and we were unable to recover it. 00:34:48.126 [2024-07-14 09:44:32.435104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.126 [2024-07-14 09:44:32.435131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.126 qpair failed and we were unable to recover it. 00:34:48.126 [2024-07-14 09:44:32.435349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.126 [2024-07-14 09:44:32.435379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.126 qpair failed and we were unable to recover it. 00:34:48.126 [2024-07-14 09:44:32.435597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.126 [2024-07-14 09:44:32.435622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.126 qpair failed and we were unable to recover it. 00:34:48.126 [2024-07-14 09:44:32.435774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.126 [2024-07-14 09:44:32.435799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.126 qpair failed and we were unable to recover it. 00:34:48.126 [2024-07-14 09:44:32.436000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.126 [2024-07-14 09:44:32.436026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.126 qpair failed and we were unable to recover it. 00:34:48.126 [2024-07-14 09:44:32.436242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.126 [2024-07-14 09:44:32.436268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.126 qpair failed and we were unable to recover it. 00:34:48.126 [2024-07-14 09:44:32.436478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.126 [2024-07-14 09:44:32.436505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.126 qpair failed and we were unable to recover it. 
00:34:48.126 [2024-07-14 09:44:32.436717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.126 [2024-07-14 09:44:32.436742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.126 qpair failed and we were unable to recover it. 00:34:48.126 [2024-07-14 09:44:32.436935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.126 [2024-07-14 09:44:32.436961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.126 qpair failed and we were unable to recover it. 00:34:48.126 [2024-07-14 09:44:32.437149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.126 [2024-07-14 09:44:32.437173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.126 qpair failed and we were unable to recover it. 00:34:48.126 [2024-07-14 09:44:32.437356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.126 [2024-07-14 09:44:32.437380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.126 qpair failed and we were unable to recover it. 00:34:48.126 [2024-07-14 09:44:32.437599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.126 [2024-07-14 09:44:32.437629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.126 qpair failed and we were unable to recover it. 00:34:48.126 [2024-07-14 09:44:32.437820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.126 [2024-07-14 09:44:32.437845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.126 qpair failed and we were unable to recover it. 00:34:48.126 [2024-07-14 09:44:32.438053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.126 [2024-07-14 09:44:32.438079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.126 qpair failed and we were unable to recover it. 00:34:48.126 [2024-07-14 09:44:32.438294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.126 [2024-07-14 09:44:32.438319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.126 qpair failed and we were unable to recover it. 00:34:48.126 [2024-07-14 09:44:32.438483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.126 [2024-07-14 09:44:32.438508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.126 qpair failed and we were unable to recover it. 00:34:48.126 [2024-07-14 09:44:32.438696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.126 [2024-07-14 09:44:32.438721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.126 qpair failed and we were unable to recover it. 
00:34:48.126 [2024-07-14 09:44:32.438933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.126 [2024-07-14 09:44:32.438959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.126 qpair failed and we were unable to recover it. 00:34:48.126 [2024-07-14 09:44:32.439148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.126 [2024-07-14 09:44:32.439173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.126 qpair failed and we were unable to recover it. 00:34:48.126 [2024-07-14 09:44:32.439336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.126 [2024-07-14 09:44:32.439361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.439551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.439575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.439799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.439824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.439995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.440021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.440237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.440262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.440438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.440462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.440683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.440708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.440920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.440946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 
00:34:48.127 [2024-07-14 09:44:32.441160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.441185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.441372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.441396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.441590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.441614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.441844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.441875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.442069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.442094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.442282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.442308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.442516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.442541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.442698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.442722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.442886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.442923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.443106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.443132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 
00:34:48.127 [2024-07-14 09:44:32.443354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.443380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.443583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.443608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.443783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.443808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.443994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.444020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.444183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.444208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.444390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.444416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.444631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.444656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.444842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.444874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.445040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.445066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.445226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.445251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 
00:34:48.127 [2024-07-14 09:44:32.445441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.445466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.445633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.445660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.445852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.445893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.446155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.446183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.446395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.446424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.446618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.446644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.446886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.446925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.447135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.447172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.447356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.447382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.447595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.447620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 
00:34:48.127 [2024-07-14 09:44:32.447835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.447860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.448065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.448091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.448287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.448312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.448505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.448529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.448713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.448738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.448906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.448933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.449142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.449167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.449356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.449382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.449534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.449559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.449753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.449779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 
00:34:48.127 [2024-07-14 09:44:32.449969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.449995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.450154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.450179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.450360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.450385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.450552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.450578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.450762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.450787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.451000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.451029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.451252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.451275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.451518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.451543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.451708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.451735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.451906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.451932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 
00:34:48.127 [2024-07-14 09:44:32.452117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.452142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.452360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.452385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.452583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.452612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.452801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.452826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.453017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.453044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.453227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.453253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.453437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.453462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.453661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.453686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.453901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.453928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.454163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.454188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 
00:34:48.127 [2024-07-14 09:44:32.454383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.454408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.454597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.454622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.454812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.454837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.455024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.455050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.455220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.455246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.455410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.455451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.455683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.455710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.455933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.455959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.456199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.456226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.456427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.456454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 
00:34:48.127 [2024-07-14 09:44:32.456659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.456685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.456877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.456903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.457085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.457110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.457267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.457293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.457479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.457504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.457663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.127 [2024-07-14 09:44:32.457689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.127 qpair failed and we were unable to recover it. 00:34:48.127 [2024-07-14 09:44:32.457891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.457916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.458078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.458103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.458340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.458367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.458577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.458602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 
00:34:48.128 [2024-07-14 09:44:32.458814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.458843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.459078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.459106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.459296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.459322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.459505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.459530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.459743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.459768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.459958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.459984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.460182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.460207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.460421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.460446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.460600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.460625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.460815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.460840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 
00:34:48.128 [2024-07-14 09:44:32.461053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.461079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.461269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.461295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.461508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.461534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.461711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.461739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.461920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.461947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.462138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.462164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.462330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.462355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.462520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.462545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.462727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.462751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.462967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.462993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 
00:34:48.128 [2024-07-14 09:44:32.463154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.463179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.463338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.463363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.463553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.463578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.463744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.463769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.463964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.463991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.464176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.464201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.464384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.464409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.464576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.464602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.464788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.464814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.465001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.465028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 
00:34:48.128 [2024-07-14 09:44:32.465222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.465247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.465414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.465440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.465599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.465625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.465814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.465839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.466006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.466032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.466212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.466237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.466421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.466446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.466634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.466659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.466876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.466901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.467092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.467117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 
00:34:48.128 [2024-07-14 09:44:32.467277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.467307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.467494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.467519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.467689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.467714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.467882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.467909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.468126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.468151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.468313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.468339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.468551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.468577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.468768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.468793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.468987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.469013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.469207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.469232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 
00:34:48.128 [2024-07-14 09:44:32.469401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.469426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.469616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.469641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.469830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.469855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.470049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.470075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.470295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.470321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.470485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.470511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.470699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.470724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.470941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.470967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.471179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.471204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.471389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.471414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 
00:34:48.128 [2024-07-14 09:44:32.471623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.471647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.471840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.471872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.472068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.472094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.472283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.472308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.472492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.472517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.472706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.472731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.472922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.472949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.473146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.473172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.473363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.473389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.473580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.473605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 
00:34:48.128 [2024-07-14 09:44:32.473822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.473847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.474015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.474041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.474229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.128 [2024-07-14 09:44:32.474255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.128 qpair failed and we were unable to recover it. 00:34:48.128 [2024-07-14 09:44:32.474447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.474472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.474704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.474729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.474989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.475015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.475206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.475231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.475399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.475424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.475623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.475648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.475809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.475834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 
00:34:48.129 [2024-07-14 09:44:32.476006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.476032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.476209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.476237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.476429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.476454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.476636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.476661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.476851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.476883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.477079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.477105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.477300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.477325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.477505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.477530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.477741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.477766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.477927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.477953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 
00:34:48.129 [2024-07-14 09:44:32.478114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.478140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.478331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.478356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.478545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.478570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.478751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.478776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.478987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.479013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.479179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.479203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.479395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.479420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.479608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.479634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.479789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.479815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.480014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.480039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 
00:34:48.129 [2024-07-14 09:44:32.480224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.480250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.480442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.480468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.480649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.480675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.480861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.480897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.481088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.481114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.481331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.481356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.481512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.481537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.481750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.481776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.481967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.481997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.482154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.482179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 
00:34:48.129 [2024-07-14 09:44:32.482371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.482396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.482584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.482609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.482802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.482828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.482999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.483024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.483206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.483231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.483424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.483449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.483662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.483687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.483853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.483884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.484071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.484096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.484259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.484285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 
00:34:48.129 [2024-07-14 09:44:32.484498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.484523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.484715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.484741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.484956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.484982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.485144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.485171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.485366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.485391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.485578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.485603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.485808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.485833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.486060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.486086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.486252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.486277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.486440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.486467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 
00:34:48.129 [2024-07-14 09:44:32.486682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.486708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.486921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.486947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.487127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.487152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.487343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.487368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.487530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.487555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.487746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.487771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.487988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.488014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.488195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.488221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.488379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.488404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.488595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.488621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 
00:34:48.129 [2024-07-14 09:44:32.488791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.488816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.489012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.489038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.489250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.489275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.489493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.489518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.489677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.489703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.489921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.489947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.490133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.490159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.490376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.490402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.490621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.490646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.490856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.490892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 
00:34:48.129 [2024-07-14 09:44:32.491084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.491109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.491325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.491350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.129 [2024-07-14 09:44:32.491521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.129 [2024-07-14 09:44:32.491546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.129 qpair failed and we were unable to recover it. 00:34:48.130 [2024-07-14 09:44:32.491736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.130 [2024-07-14 09:44:32.491761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.130 qpair failed and we were unable to recover it. 00:34:48.130 [2024-07-14 09:44:32.491974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.130 [2024-07-14 09:44:32.492000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.130 qpair failed and we were unable to recover it. 00:34:48.130 [2024-07-14 09:44:32.492190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.130 [2024-07-14 09:44:32.492216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.130 qpair failed and we were unable to recover it. 00:34:48.130 [2024-07-14 09:44:32.492404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.130 [2024-07-14 09:44:32.492429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.130 qpair failed and we were unable to recover it. 00:34:48.130 [2024-07-14 09:44:32.492616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.130 [2024-07-14 09:44:32.492641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.130 qpair failed and we were unable to recover it. 00:34:48.130 [2024-07-14 09:44:32.492809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.130 [2024-07-14 09:44:32.492834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.130 qpair failed and we were unable to recover it. 00:34:48.130 [2024-07-14 09:44:32.493059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.130 [2024-07-14 09:44:32.493085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.130 qpair failed and we were unable to recover it. 
00:34:48.130 [2024-07-14 09:44:32.493281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.130 [2024-07-14 09:44:32.493306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.130 qpair failed and we were unable to recover it. 00:34:48.130 [2024-07-14 09:44:32.493497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.130 [2024-07-14 09:44:32.493522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.130 qpair failed and we were unable to recover it. 00:34:48.130 [2024-07-14 09:44:32.493716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.130 [2024-07-14 09:44:32.493741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.130 qpair failed and we were unable to recover it. 00:34:48.130 [2024-07-14 09:44:32.493913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.130 [2024-07-14 09:44:32.493939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.130 qpair failed and we were unable to recover it. 00:34:48.130 [2024-07-14 09:44:32.494093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.130 [2024-07-14 09:44:32.494119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.130 qpair failed and we were unable to recover it. 00:34:48.130 [2024-07-14 09:44:32.494286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.130 [2024-07-14 09:44:32.494312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.130 qpair failed and we were unable to recover it. 00:34:48.130 [2024-07-14 09:44:32.494477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.130 [2024-07-14 09:44:32.494502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.130 qpair failed and we were unable to recover it. 00:34:48.130 [2024-07-14 09:44:32.494690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.130 [2024-07-14 09:44:32.494715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.130 qpair failed and we were unable to recover it. 00:34:48.130 [2024-07-14 09:44:32.494924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.130 [2024-07-14 09:44:32.494949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.130 qpair failed and we were unable to recover it. 00:34:48.130 [2024-07-14 09:44:32.495167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.130 [2024-07-14 09:44:32.495192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.130 qpair failed and we were unable to recover it. 
00:34:48.130 [2024-07-14 09:44:32.495379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.130 [2024-07-14 09:44:32.495404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.130 qpair failed and we were unable to recover it. 00:34:48.130 [2024-07-14 09:44:32.495593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.130 [2024-07-14 09:44:32.495618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.130 qpair failed and we were unable to recover it. 00:34:48.130 [2024-07-14 09:44:32.495807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.130 [2024-07-14 09:44:32.495832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.130 qpair failed and we were unable to recover it. 00:34:48.130 [2024-07-14 09:44:32.495999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.130 [2024-07-14 09:44:32.496025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.130 qpair failed and we were unable to recover it. 00:34:48.130 [2024-07-14 09:44:32.496187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.130 [2024-07-14 09:44:32.496212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.130 qpair failed and we were unable to recover it. 00:34:48.130 [2024-07-14 09:44:32.496402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.130 [2024-07-14 09:44:32.496427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.130 qpair failed and we were unable to recover it. 00:34:48.130 [2024-07-14 09:44:32.496620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.130 [2024-07-14 09:44:32.496649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.130 qpair failed and we were unable to recover it. 00:34:48.130 [2024-07-14 09:44:32.496836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.130 [2024-07-14 09:44:32.496861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.130 qpair failed and we were unable to recover it. 00:34:48.130 [2024-07-14 09:44:32.497067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.130 [2024-07-14 09:44:32.497094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.130 qpair failed and we were unable to recover it. 00:34:48.130 [2024-07-14 09:44:32.497267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.130 [2024-07-14 09:44:32.497293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.130 qpair failed and we were unable to recover it. 
00:34:48.130 [2024-07-14 09:44:32.497479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.130 [2024-07-14 09:44:32.497505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.130 qpair failed and we were unable to recover it. 00:34:48.130 [2024-07-14 09:44:32.497690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.130 [2024-07-14 09:44:32.497716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.130 qpair failed and we were unable to recover it. 00:34:48.130 [2024-07-14 09:44:32.497879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.130 [2024-07-14 09:44:32.497905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.130 qpair failed and we were unable to recover it. 00:34:48.130 [2024-07-14 09:44:32.498076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.130 [2024-07-14 09:44:32.498102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.130 qpair failed and we were unable to recover it. 00:34:48.130 [2024-07-14 09:44:32.498272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.130 [2024-07-14 09:44:32.498297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.130 qpair failed and we were unable to recover it. 00:34:48.130 [2024-07-14 09:44:32.498487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.130 [2024-07-14 09:44:32.498512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.130 qpair failed and we were unable to recover it. 00:34:48.130 [2024-07-14 09:44:32.498695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.130 [2024-07-14 09:44:32.498720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.130 qpair failed and we were unable to recover it. 00:34:48.130 [2024-07-14 09:44:32.498906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.130 [2024-07-14 09:44:32.498932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.130 qpair failed and we were unable to recover it. 00:34:48.130 [2024-07-14 09:44:32.499087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.130 [2024-07-14 09:44:32.499112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.130 qpair failed and we were unable to recover it. 00:34:48.130 [2024-07-14 09:44:32.499326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.130 [2024-07-14 09:44:32.499351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.130 qpair failed and we were unable to recover it. 
00:34:48.130 [2024-07-14 09:44:32.499520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:48.130 [2024-07-14 09:44:32.499545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420
00:34:48.130 qpair failed and we were unable to recover it.
00:34:48.130 [... the same three-message pattern (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every reconnection attempt between 09:44:32.499520 and 09:44:32.544211 ...]
00:34:48.133 [2024-07-14 09:44:32.544185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:48.133 [2024-07-14 09:44:32.544211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420
00:34:48.133 qpair failed and we were unable to recover it.
00:34:48.133 [2024-07-14 09:44:32.544366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.544391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.544549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.544575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.544790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.544815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.545006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.545031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.545218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.545243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.545426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.545452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.545643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.545667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.545831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.545856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.546091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.546117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.546275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.546300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 
00:34:48.133 [2024-07-14 09:44:32.546452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.546477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.546671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.546697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.546893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.546919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.547082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.547107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.547290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.547316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.547505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.547531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.547730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.547755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.547974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.548000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.548166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.548191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.548380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.548406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 
00:34:48.133 [2024-07-14 09:44:32.548590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.548615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.548804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.548833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.549058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.549084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.549274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.549299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.549497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.549522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.549710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.549735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.549927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.549953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.550135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.550160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.550352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.550378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.550536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.550561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 
00:34:48.133 [2024-07-14 09:44:32.550756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.550781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.550973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.550999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.551216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.551241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.551428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.551454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.551622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.551647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.551875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.551901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.552064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.552090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.552282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.552307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.552499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.552524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.552689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.552714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 
00:34:48.133 [2024-07-14 09:44:32.552910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.552935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.553104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.553129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.553298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.553324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.553485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.553509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.553667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.553693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.553857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.553896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.554093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.554118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.554310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.554335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.554547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.554572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.554790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.554815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 
00:34:48.133 [2024-07-14 09:44:32.554976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.555002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.555159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.555184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.555381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.555406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.555586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.555611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.555804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.555841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.556032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.556071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.556281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.556307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.556466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.556492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.556679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.556705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.556900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.556927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 
00:34:48.133 [2024-07-14 09:44:32.557109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.557134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.557322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.557348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.557518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.557550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.557730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.557763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.557932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.557958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.558144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.558169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.558364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.558390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.558604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.558630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.558824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.558849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.559055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.559090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 
00:34:48.133 [2024-07-14 09:44:32.559305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.559331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.133 [2024-07-14 09:44:32.559504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.133 [2024-07-14 09:44:32.559529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.133 qpair failed and we were unable to recover it. 00:34:48.401 [2024-07-14 09:44:32.559716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.401 [2024-07-14 09:44:32.559741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.401 qpair failed and we were unable to recover it. 00:34:48.401 [2024-07-14 09:44:32.559912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.401 [2024-07-14 09:44:32.559939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.401 qpair failed and we were unable to recover it. 00:34:48.401 [2024-07-14 09:44:32.560129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.401 [2024-07-14 09:44:32.560155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.401 qpair failed and we were unable to recover it. 00:34:48.401 [2024-07-14 09:44:32.560333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.401 [2024-07-14 09:44:32.560358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.401 qpair failed and we were unable to recover it. 00:34:48.401 [2024-07-14 09:44:32.560528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.401 [2024-07-14 09:44:32.560553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.401 qpair failed and we were unable to recover it. 00:34:48.401 [2024-07-14 09:44:32.560744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.401 [2024-07-14 09:44:32.560771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.401 qpair failed and we were unable to recover it. 00:34:48.401 [2024-07-14 09:44:32.560930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.401 [2024-07-14 09:44:32.560957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.401 qpair failed and we were unable to recover it. 00:34:48.401 [2024-07-14 09:44:32.561149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.401 [2024-07-14 09:44:32.561179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.401 qpair failed and we were unable to recover it. 
00:34:48.401 [2024-07-14 09:44:32.561357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.401 [2024-07-14 09:44:32.561384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.401 qpair failed and we were unable to recover it. 00:34:48.401 [2024-07-14 09:44:32.561593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.401 [2024-07-14 09:44:32.561620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.401 qpair failed and we were unable to recover it. 00:34:48.401 [2024-07-14 09:44:32.561785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.401 [2024-07-14 09:44:32.561812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.401 qpair failed and we were unable to recover it. 00:34:48.401 [2024-07-14 09:44:32.561977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.401 [2024-07-14 09:44:32.562009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.401 qpair failed and we were unable to recover it. 00:34:48.401 [2024-07-14 09:44:32.562173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.401 [2024-07-14 09:44:32.562206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.401 qpair failed and we were unable to recover it. 00:34:48.401 [2024-07-14 09:44:32.562375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.401 [2024-07-14 09:44:32.562401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.401 qpair failed and we were unable to recover it. 00:34:48.401 [2024-07-14 09:44:32.562586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.401 [2024-07-14 09:44:32.562612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.401 qpair failed and we were unable to recover it. 00:34:48.401 [2024-07-14 09:44:32.562784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.401 [2024-07-14 09:44:32.562819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.401 qpair failed and we were unable to recover it. 00:34:48.401 [2024-07-14 09:44:32.562988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.401 [2024-07-14 09:44:32.563023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.401 qpair failed and we were unable to recover it. 00:34:48.401 [2024-07-14 09:44:32.563224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.401 [2024-07-14 09:44:32.563255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.401 qpair failed and we were unable to recover it. 
00:34:48.401 [2024-07-14 09:44:32.563450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.401 [2024-07-14 09:44:32.563477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.401 qpair failed and we were unable to recover it. 00:34:48.401 [2024-07-14 09:44:32.563668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.401 [2024-07-14 09:44:32.563693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.401 qpair failed and we were unable to recover it. 00:34:48.401 [2024-07-14 09:44:32.563884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.401 [2024-07-14 09:44:32.563911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.401 qpair failed and we were unable to recover it. 00:34:48.401 [2024-07-14 09:44:32.564104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.401 [2024-07-14 09:44:32.564131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.401 qpair failed and we were unable to recover it. 00:34:48.401 [2024-07-14 09:44:32.564322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.401 [2024-07-14 09:44:32.564347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.401 qpair failed and we were unable to recover it. 00:34:48.401 [2024-07-14 09:44:32.564510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.401 [2024-07-14 09:44:32.564536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.401 qpair failed and we were unable to recover it. 00:34:48.401 [2024-07-14 09:44:32.564719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.401 [2024-07-14 09:44:32.564744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.401 qpair failed and we were unable to recover it. 00:34:48.401 [2024-07-14 09:44:32.564931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.401 [2024-07-14 09:44:32.564956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.401 qpair failed and we were unable to recover it. 00:34:48.402 [2024-07-14 09:44:32.565149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.402 [2024-07-14 09:44:32.565175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.402 qpair failed and we were unable to recover it. 00:34:48.402 [2024-07-14 09:44:32.565389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.402 [2024-07-14 09:44:32.565414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.402 qpair failed and we were unable to recover it. 
00:34:48.402 [2024-07-14 09:44:32.565641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.402 [2024-07-14 09:44:32.565666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.402 qpair failed and we were unable to recover it. 00:34:48.402 [2024-07-14 09:44:32.565884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.402 [2024-07-14 09:44:32.565910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.402 qpair failed and we were unable to recover it. 00:34:48.402 [2024-07-14 09:44:32.566072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.402 [2024-07-14 09:44:32.566098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.402 qpair failed and we were unable to recover it. 00:34:48.402 [2024-07-14 09:44:32.566267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.402 [2024-07-14 09:44:32.566292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.402 qpair failed and we were unable to recover it. 00:34:48.402 [2024-07-14 09:44:32.566484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.402 [2024-07-14 09:44:32.566510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.402 qpair failed and we were unable to recover it. 00:34:48.402 [2024-07-14 09:44:32.566728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.402 [2024-07-14 09:44:32.566753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.402 qpair failed and we were unable to recover it. 00:34:48.402 [2024-07-14 09:44:32.566970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.402 [2024-07-14 09:44:32.566996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.402 qpair failed and we were unable to recover it. 00:34:48.402 [2024-07-14 09:44:32.567189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.402 [2024-07-14 09:44:32.567214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.402 qpair failed and we were unable to recover it. 00:34:48.402 [2024-07-14 09:44:32.567375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.402 [2024-07-14 09:44:32.567400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.402 qpair failed and we were unable to recover it. 00:34:48.402 [2024-07-14 09:44:32.567565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.402 [2024-07-14 09:44:32.567591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.402 qpair failed and we were unable to recover it. 
00:34:48.402 [2024-07-14 09:44:32.567751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.402 [2024-07-14 09:44:32.567776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.402 qpair failed and we were unable to recover it. 00:34:48.402 [2024-07-14 09:44:32.567944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.402 [2024-07-14 09:44:32.567970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.402 qpair failed and we were unable to recover it. 00:34:48.402 [2024-07-14 09:44:32.568181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.402 [2024-07-14 09:44:32.568206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.402 qpair failed and we were unable to recover it. 00:34:48.402 [2024-07-14 09:44:32.568393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.402 [2024-07-14 09:44:32.568418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.402 qpair failed and we were unable to recover it. 00:34:48.402 [2024-07-14 09:44:32.568633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.402 [2024-07-14 09:44:32.568658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.402 qpair failed and we were unable to recover it. 00:34:48.402 [2024-07-14 09:44:32.568847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.402 [2024-07-14 09:44:32.568879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.402 qpair failed and we were unable to recover it. 00:34:48.402 [2024-07-14 09:44:32.569062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.402 [2024-07-14 09:44:32.569087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.402 qpair failed and we were unable to recover it. 00:34:48.402 [2024-07-14 09:44:32.569284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.402 [2024-07-14 09:44:32.569310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.402 qpair failed and we were unable to recover it. 00:34:48.402 [2024-07-14 09:44:32.569526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.402 [2024-07-14 09:44:32.569551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.402 qpair failed and we were unable to recover it. 00:34:48.402 [2024-07-14 09:44:32.569740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.402 [2024-07-14 09:44:32.569765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.402 qpair failed and we were unable to recover it. 
00:34:48.402 [2024-07-14 09:44:32.569924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.402 [2024-07-14 09:44:32.569950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.402 qpair failed and we were unable to recover it. 00:34:48.402 [2024-07-14 09:44:32.570117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.402 [2024-07-14 09:44:32.570143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.402 qpair failed and we were unable to recover it. 00:34:48.402 [2024-07-14 09:44:32.570339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.402 [2024-07-14 09:44:32.570364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.402 qpair failed and we were unable to recover it. 00:34:48.402 [2024-07-14 09:44:32.570560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.402 [2024-07-14 09:44:32.570587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.402 qpair failed and we were unable to recover it. 00:34:48.402 [2024-07-14 09:44:32.570744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.402 [2024-07-14 09:44:32.570770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.402 qpair failed and we were unable to recover it. 00:34:48.402 [2024-07-14 09:44:32.570988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.402 [2024-07-14 09:44:32.571014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.402 qpair failed and we were unable to recover it. 00:34:48.402 [2024-07-14 09:44:32.571205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.402 [2024-07-14 09:44:32.571230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.402 qpair failed and we were unable to recover it. 00:34:48.402 [2024-07-14 09:44:32.571403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.402 [2024-07-14 09:44:32.571428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.402 qpair failed and we were unable to recover it. 00:34:48.402 [2024-07-14 09:44:32.571588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.402 [2024-07-14 09:44:32.571613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.402 qpair failed and we were unable to recover it. 00:34:48.402 [2024-07-14 09:44:32.571800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.402 [2024-07-14 09:44:32.571825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.402 qpair failed and we were unable to recover it. 
00:34:48.402 [2024-07-14 09:44:32.571989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.402 [2024-07-14 09:44:32.572019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.402 qpair failed and we were unable to recover it. 00:34:48.402 [2024-07-14 09:44:32.572181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.402 [2024-07-14 09:44:32.572206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.402 qpair failed and we were unable to recover it. 00:34:48.402 [2024-07-14 09:44:32.572420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.402 [2024-07-14 09:44:32.572445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.402 qpair failed and we were unable to recover it. 00:34:48.402 [2024-07-14 09:44:32.572628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.402 [2024-07-14 09:44:32.572653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.402 qpair failed and we were unable to recover it. 00:34:48.402 [2024-07-14 09:44:32.572851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.402 [2024-07-14 09:44:32.572883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.402 qpair failed and we were unable to recover it. 00:34:48.402 [2024-07-14 09:44:32.573084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.402 [2024-07-14 09:44:32.573110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.402 qpair failed and we were unable to recover it. 00:34:48.402 [2024-07-14 09:44:32.573296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.402 [2024-07-14 09:44:32.573322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.402 qpair failed and we were unable to recover it. 00:34:48.402 [2024-07-14 09:44:32.573511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.402 [2024-07-14 09:44:32.573537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.402 qpair failed and we were unable to recover it. 00:34:48.402 [2024-07-14 09:44:32.573702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.402 [2024-07-14 09:44:32.573728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.402 qpair failed and we were unable to recover it. 00:34:48.402 [2024-07-14 09:44:32.573912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.402 [2024-07-14 09:44:32.573938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.402 qpair failed and we were unable to recover it. 
00:34:48.402 [... the same three-line error sequence (posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every further connection attempt from [2024-07-14 09:44:32.574126] through [2024-07-14 09:44:32.617296], Jenkins timestamps 00:34:48.402 through 00:34:48.405, with only the microsecond timestamps changing ...]
00:34:48.405 [2024-07-14 09:44:32.617490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.617515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.617706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.617731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.617957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.617983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.618184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.618209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.618365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.618390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.618578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.618603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.618818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.618844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.619040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.619066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.619257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.619282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.619450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.619476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 
00:34:48.405 [2024-07-14 09:44:32.619635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.619660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.619816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.619841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.620012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.620037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.620227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.620252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.620442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.620467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.620633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.620658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.620823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.620848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.621014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.621039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.621226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.621251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.621441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.621466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 
00:34:48.405 [2024-07-14 09:44:32.621625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.621650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.621893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.621920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.622098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.622123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.622313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.622339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.622551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.622577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.622759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.622784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.622990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.623016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.623200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.623226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.623413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.623438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.623633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.623659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 
00:34:48.405 [2024-07-14 09:44:32.623846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.623877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.624069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.624094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.624305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.624330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.624505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.624530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.624721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.624746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.624937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.624967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.625190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.625215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.625399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.625424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.625610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.625635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.625802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.625827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 
00:34:48.405 [2024-07-14 09:44:32.626022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.626047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.626213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.626238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.626391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.626416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.626601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.626626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.626818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.626842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.627005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.627031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.627249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.627274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.627440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.627465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.627632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.627657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.627849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.627883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 
00:34:48.405 [2024-07-14 09:44:32.628108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.628133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.628325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.628350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.628563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.628588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.628761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.628786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.628970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.628996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.629214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.629239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.629439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.629464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.629645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.629670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.629858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.629900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.630057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.630083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 
00:34:48.405 [2024-07-14 09:44:32.630271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.630296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.630485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.630509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.630699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.630730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.630943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.630970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.631184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.631209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.631426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.631451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.631667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.631692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.631907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.631932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.632115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.632141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.632298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.632323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 
00:34:48.405 [2024-07-14 09:44:32.632509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.632534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.632699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.632724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.632925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.632951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.633136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.633161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.405 qpair failed and we were unable to recover it. 00:34:48.405 [2024-07-14 09:44:32.633351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.405 [2024-07-14 09:44:32.633376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.633565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.633590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.633774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.633800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.633966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.633992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.634184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.634209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.634463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.634488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 
00:34:48.406 [2024-07-14 09:44:32.634679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.634704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.634864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.634896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.635089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.635114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.635326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.635351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.635541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.635566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.635761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.635786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.635967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.635993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.636197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.636223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.636413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.636438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.636627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.636652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 
00:34:48.406 [2024-07-14 09:44:32.636848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.636881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.637085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.637110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.637296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.637321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.637511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.637536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.637727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.637752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.637919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.637946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.638159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.638184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.638371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.638396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.638584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.638608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.638819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.638844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 
00:34:48.406 [2024-07-14 09:44:32.639039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.639065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.639265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.639291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.639482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.639507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.639695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.639724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.639883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.639909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.640068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.640094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.640310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.640335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.640522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.640547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.640734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.640759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.640947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.640973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 
00:34:48.406 [2024-07-14 09:44:32.641190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.641215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.641406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.641431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.641624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.641649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.641919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.641945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.642197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.642222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.642446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.642472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.642664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.642690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.642849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.642881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.643072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.643097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.643320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.643345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 
00:34:48.406 [2024-07-14 09:44:32.643523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.643548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.643765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.643790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.643942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.643968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.644193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.644218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.644407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.644432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.644579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.644604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.644767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.644792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.644982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.645019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.645187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.645213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.645402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.645427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 
00:34:48.406 [2024-07-14 09:44:32.645640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.645669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.645892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.645918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.646089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.646115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.646275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.646300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.646486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.646511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.646669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.646694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.646913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.646939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.647127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.647153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.647375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.647400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.647565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.647591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 
00:34:48.406 [2024-07-14 09:44:32.647747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.647772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.647990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.648016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.648216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.648241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.648430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.648456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.648670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.648718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.406 [2024-07-14 09:44:32.648969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.406 [2024-07-14 09:44:32.648998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:48.406 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.649171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.649198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.649364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.649398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.649608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.649636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.649870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.649898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f166c000b90 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 
00:34:48.407 [2024-07-14 09:44:32.650120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.650147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.650364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.650389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.650578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.650604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.650796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.650821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.650988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.651014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.651204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.651229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.651424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.651450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.651639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.651664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.651830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.651855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.652028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.652053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 
00:34:48.407 [2024-07-14 09:44:32.652244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.652270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.652435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.652460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.652654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.652679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.652837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.652863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.653053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.653078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.653266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.653291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.653480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.653505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.653690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.653715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.653884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.653910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.654123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.654149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 
00:34:48.407 [2024-07-14 09:44:32.654340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.654365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.654552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.654577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.654732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.654758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.654954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.654980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.655164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.655190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.655373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.655398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.655663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.655688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.655915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.655941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.656164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.656189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.656380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.656405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 
00:34:48.407 [2024-07-14 09:44:32.656569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.656595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.656765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.656790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.657004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.657030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.657243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.657269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.657432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.657457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.657617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.657642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.657835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.657860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.658091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.658116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.658304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.658331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.658519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.658545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 
00:34:48.407 [2024-07-14 09:44:32.658709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.658734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.658896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.658923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.659086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.659112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.659298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.659323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.659484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.659510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.659720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.659745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.659936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.659962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.660122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.660147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.660310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.660339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.660553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.660578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 
00:34:48.407 [2024-07-14 09:44:32.660793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.660819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.661037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.661062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.661281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.661306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.661493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.661518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.661725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.661750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.661941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.661967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.662158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.662183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.662373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.662398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.662556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.662582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.662764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.662789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 
00:34:48.407 [2024-07-14 09:44:32.663002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.663029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.663214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.663239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.663432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.663458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.663652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.663677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.663863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.663895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.664056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.664081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.664274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.664299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.664462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.664487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.664676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.664701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.664895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.664920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 
00:34:48.407 [2024-07-14 09:44:32.665115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.665141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.665333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.665358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.665565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.665590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.665753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.665779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.665998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.407 [2024-07-14 09:44:32.666024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.407 qpair failed and we were unable to recover it. 00:34:48.407 [2024-07-14 09:44:32.666195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.666220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.408 [2024-07-14 09:44:32.666381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.666406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.408 [2024-07-14 09:44:32.666561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.666586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.408 [2024-07-14 09:44:32.666774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.666799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.408 [2024-07-14 09:44:32.666968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.666995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 
00:34:48.408 [2024-07-14 09:44:32.667182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.667207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.408 [2024-07-14 09:44:32.667368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.667393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.408 [2024-07-14 09:44:32.667553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.667592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.408 [2024-07-14 09:44:32.667792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.667817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.408 [2024-07-14 09:44:32.667992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.668019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.408 [2024-07-14 09:44:32.668181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.668207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.408 [2024-07-14 09:44:32.668392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.668417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.408 [2024-07-14 09:44:32.668616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.668642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.408 [2024-07-14 09:44:32.668795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.668820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.408 [2024-07-14 09:44:32.669001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.669031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 
00:34:48.408 [2024-07-14 09:44:32.669195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.669220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.408 [2024-07-14 09:44:32.669387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.669412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.408 [2024-07-14 09:44:32.669596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.669622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.408 [2024-07-14 09:44:32.669778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.669804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.408 [2024-07-14 09:44:32.669974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.670000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.408 [2024-07-14 09:44:32.670160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.670185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.408 [2024-07-14 09:44:32.670350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.670375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.408 [2024-07-14 09:44:32.670567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.670592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.408 [2024-07-14 09:44:32.670753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.670779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.408 [2024-07-14 09:44:32.670974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.671000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 
00:34:48.408 [2024-07-14 09:44:32.671165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.671190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.408 [2024-07-14 09:44:32.671357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.671382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.408 [2024-07-14 09:44:32.671574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.671600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.408 [2024-07-14 09:44:32.671773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.671799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.408 [2024-07-14 09:44:32.671997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.672023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.408 [2024-07-14 09:44:32.672215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.672240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.408 [2024-07-14 09:44:32.672432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.672457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.408 [2024-07-14 09:44:32.672615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.672640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.408 [2024-07-14 09:44:32.672827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.672852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.408 [2024-07-14 09:44:32.673052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.673077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 
00:34:48.408 [2024-07-14 09:44:32.673238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.673263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.408 [2024-07-14 09:44:32.673449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.673474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.408 [2024-07-14 09:44:32.673667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.673692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.408 [2024-07-14 09:44:32.673845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.673876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.408 [2024-07-14 09:44:32.674079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.674105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.408 [2024-07-14 09:44:32.674270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.674296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.408 [2024-07-14 09:44:32.674480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.674512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.408 [2024-07-14 09:44:32.674676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.674701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.408 [2024-07-14 09:44:32.674864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.674911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.408 [2024-07-14 09:44:32.675106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.675131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 
00:34:48.408 [2024-07-14 09:44:32.675356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.675381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.408 [2024-07-14 09:44:32.675546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.675571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.408 [2024-07-14 09:44:32.675732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.675757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.408 [2024-07-14 09:44:32.675946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.675972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.408 [2024-07-14 09:44:32.676164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.676190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.408 [2024-07-14 09:44:32.676362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.676386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.408 [2024-07-14 09:44:32.676605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.676630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.408 [2024-07-14 09:44:32.676897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.676922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.408 [2024-07-14 09:44:32.677113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.677138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.408 [2024-07-14 09:44:32.677304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.677329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 
00:34:48.408 [2024-07-14 09:44:32.677548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.677573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.408 [2024-07-14 09:44:32.677762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.677787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.408 [2024-07-14 09:44:32.678055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.678081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.408 [2024-07-14 09:44:32.678301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.678326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.408 [2024-07-14 09:44:32.678507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.678531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.408 [2024-07-14 09:44:32.678792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.678817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.408 [2024-07-14 09:44:32.679046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.679072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.408 [2024-07-14 09:44:32.679263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.679289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.408 [2024-07-14 09:44:32.679574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.679613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.408 [2024-07-14 09:44:32.679887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.679913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 
00:34:48.408 [2024-07-14 09:44:32.680102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.680128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.408 [2024-07-14 09:44:32.680292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.680317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.408 [2024-07-14 09:44:32.680522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.680546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.408 [2024-07-14 09:44:32.680788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.680813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.408 [2024-07-14 09:44:32.681035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.408 [2024-07-14 09:44:32.681061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.408 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.681252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.681278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.681487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.681512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.681700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.681725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.681885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.681911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.682098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.682123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 
00:34:48.409 [2024-07-14 09:44:32.682315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.682341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.682557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.682583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.682779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.682803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.683017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.683043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.683236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.683261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.683473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.683498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.683693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.683719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.683900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.683930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.684122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.684148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.684337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.684362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 
00:34:48.409 [2024-07-14 09:44:32.684548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.684574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.684758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.684784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.684973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.684999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.685191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.685215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.685374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.685399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.685555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.685581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.685775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.685800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.685960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.685985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.686174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.686199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.686390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.686415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 
00:34:48.409 [2024-07-14 09:44:32.686571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.686596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.686819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.686844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.687017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.687043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.687236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.687261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.687447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.687472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.687661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.687686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.687856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.687888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.688158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.688183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.688427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.688452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.688641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.688666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 
00:34:48.409 [2024-07-14 09:44:32.688851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.688883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.689070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.689095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.689292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.689317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.689530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.689556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.689744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.689773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.689962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.689988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.690179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.690205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.690410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.690435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.690625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.690650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.690877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.690902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 
00:34:48.409 [2024-07-14 09:44:32.691178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.691203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.691391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.691416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.691609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.691634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.691851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.691881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.692078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.692103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.692314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.692340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.692526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.692551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.692715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.692742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.692941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.692967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.693158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.693183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 
00:34:48.409 [2024-07-14 09:44:32.693374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.693400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.693561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.693586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.693768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.693792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.693979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.694006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.694167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.694193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.694408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.694433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.694598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.694623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.694810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.694835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.695025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.695050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.695244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.695269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 
00:34:48.409 [2024-07-14 09:44:32.695457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.695482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.695695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.695720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.695924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.695950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.696140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.696165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.696326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.696351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.696563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.696589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.696767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.696792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.696984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.697010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.697198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.697223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.697438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.697463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 
00:34:48.409 [2024-07-14 09:44:32.697677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.697703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.697877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.697903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.698127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.698152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.698343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.698369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.698559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.698583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.698772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.698801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.409 [2024-07-14 09:44:32.698991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.409 [2024-07-14 09:44:32.699017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.409 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.699227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.699252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.699439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.699464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.699657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.699682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 
00:34:48.410 [2024-07-14 09:44:32.699850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.699883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.700073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.700099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.700260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.700285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.700445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.700470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.700661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.700686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.700845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.700875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.701068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.701095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.701287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.701313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.701530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.701555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.701752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.701777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 
00:34:48.410 [2024-07-14 09:44:32.701963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.701989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.702193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.702218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.702388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.702413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.702626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.702651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.702844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.702879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.703070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.703097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.703315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.703341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.703503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.703528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.703719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.703744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.703913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.703939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 
00:34:48.410 [2024-07-14 09:44:32.704098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.704125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.704314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.704339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.704526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.704556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.704764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.704789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.704984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.705010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.705230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.705255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.705422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.705447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.705637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.705667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.705825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.705850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.706032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.706058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 
00:34:48.410 [2024-07-14 09:44:32.706255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.706280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.706445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.706471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.706664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.706689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.706886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.706912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.707107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.707132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.707319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.707344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.707560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.707586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.707774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.707799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.707998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.708024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.708245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.708271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 
00:34:48.410 [2024-07-14 09:44:32.708429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.708455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.708674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.708700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.708907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.708933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.709119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.709144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.709305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.709331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.709513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.709539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.709724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.709749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.709905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.709930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.710127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.710153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.710344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.710369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 
00:34:48.410 [2024-07-14 09:44:32.710589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.710614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.710775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.710800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.710959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.710986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.711140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.711166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.711354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.711379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.711571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.711596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.711808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.711834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.712066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.712091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.712276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.712302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.712518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.712543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 
00:34:48.410 [2024-07-14 09:44:32.712727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.712752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.712920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.712947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.713143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.713168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.713333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.713364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.713553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.713579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.713804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.713829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.714063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.410 [2024-07-14 09:44:32.714090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.410 qpair failed and we were unable to recover it. 00:34:48.410 [2024-07-14 09:44:32.714251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.411 [2024-07-14 09:44:32.714276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.411 qpair failed and we were unable to recover it. 00:34:48.411 [2024-07-14 09:44:32.714447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.411 [2024-07-14 09:44:32.714472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.411 qpair failed and we were unable to recover it. 00:34:48.411 [2024-07-14 09:44:32.714663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.411 [2024-07-14 09:44:32.714689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.411 qpair failed and we were unable to recover it. 
00:34:48.411 [2024-07-14 09:44:32.714845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.411 [2024-07-14 09:44:32.714877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.411 qpair failed and we were unable to recover it. 00:34:48.411 [2024-07-14 09:44:32.715067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.411 [2024-07-14 09:44:32.715092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.411 qpair failed and we were unable to recover it. 00:34:48.411 [2024-07-14 09:44:32.715281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.411 [2024-07-14 09:44:32.715306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.411 qpair failed and we were unable to recover it. 00:34:48.411 [2024-07-14 09:44:32.715493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.411 [2024-07-14 09:44:32.715518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.411 qpair failed and we were unable to recover it. 00:34:48.411 [2024-07-14 09:44:32.715703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.411 [2024-07-14 09:44:32.715728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.411 qpair failed and we were unable to recover it. 00:34:48.411 [2024-07-14 09:44:32.715909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.411 [2024-07-14 09:44:32.715935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.411 qpair failed and we were unable to recover it. 00:34:48.411 [2024-07-14 09:44:32.716149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.411 [2024-07-14 09:44:32.716174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.411 qpair failed and we were unable to recover it. 00:34:48.411 [2024-07-14 09:44:32.716384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.411 [2024-07-14 09:44:32.716410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.411 qpair failed and we were unable to recover it. 00:34:48.411 [2024-07-14 09:44:32.716629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.411 [2024-07-14 09:44:32.716654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.411 qpair failed and we were unable to recover it. 00:34:48.411 [2024-07-14 09:44:32.716845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.411 [2024-07-14 09:44:32.716875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.411 qpair failed and we were unable to recover it. 
00:34:48.411 [2024-07-14 09:44:32.717065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.411 [2024-07-14 09:44:32.717090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.411 qpair failed and we were unable to recover it. 00:34:48.411 [2024-07-14 09:44:32.717249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.411 [2024-07-14 09:44:32.717273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.411 qpair failed and we were unable to recover it. 00:34:48.411 [2024-07-14 09:44:32.717487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.411 [2024-07-14 09:44:32.717512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.411 qpair failed and we were unable to recover it. 00:34:48.411 [2024-07-14 09:44:32.717703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.411 [2024-07-14 09:44:32.717728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.411 qpair failed and we were unable to recover it. 00:34:48.411 [2024-07-14 09:44:32.717909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.411 [2024-07-14 09:44:32.717935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.411 qpair failed and we were unable to recover it. 00:34:48.411 [2024-07-14 09:44:32.718114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.411 [2024-07-14 09:44:32.718139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.411 qpair failed and we were unable to recover it. 00:34:48.411 [2024-07-14 09:44:32.718359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.411 [2024-07-14 09:44:32.718384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.411 qpair failed and we were unable to recover it. 00:34:48.411 [2024-07-14 09:44:32.718566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.411 [2024-07-14 09:44:32.718592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.411 qpair failed and we were unable to recover it. 00:34:48.411 [2024-07-14 09:44:32.718778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.411 [2024-07-14 09:44:32.718804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.411 qpair failed and we were unable to recover it. 00:34:48.411 [2024-07-14 09:44:32.719003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.411 [2024-07-14 09:44:32.719029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.411 qpair failed and we were unable to recover it. 
00:34:48.411 [2024-07-14 09:44:32.719217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.411 [2024-07-14 09:44:32.719242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.411 qpair failed and we were unable to recover it. 
00:34:48.411-00:34:48.413 [2024-07-14 09:44:32.719413 through 09:44:32.764085] the same three-message failure sequence (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for roughly 200 further connection attempts in this window, with only the timestamp advancing.
00:34:48.413 [2024-07-14 09:44:32.764251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.413 [2024-07-14 09:44:32.764276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.413 qpair failed and we were unable to recover it. 00:34:48.413 [2024-07-14 09:44:32.764468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.413 [2024-07-14 09:44:32.764494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.413 qpair failed and we were unable to recover it. 00:34:48.413 [2024-07-14 09:44:32.764653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.413 [2024-07-14 09:44:32.764678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.413 qpair failed and we were unable to recover it. 00:34:48.413 [2024-07-14 09:44:32.764837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.413 [2024-07-14 09:44:32.764862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.413 qpair failed and we were unable to recover it. 00:34:48.413 [2024-07-14 09:44:32.765024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.413 [2024-07-14 09:44:32.765049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.413 qpair failed and we were unable to recover it. 00:34:48.413 [2024-07-14 09:44:32.765253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.413 [2024-07-14 09:44:32.765278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.413 qpair failed and we were unable to recover it. 00:34:48.413 [2024-07-14 09:44:32.765470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.765495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.765676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.765702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.765880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.765911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.766084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.766109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 
00:34:48.414 [2024-07-14 09:44:32.766269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.766293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.766490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.766516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.766734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.766759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.766952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.766978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.767171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.767196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.767355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.767381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.767598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.767623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.767811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.767836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.768063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.768089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.768291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.768317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 
00:34:48.414 [2024-07-14 09:44:32.768514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.768539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.768741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.768767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.768964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.768990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.769182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.769207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.769395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.769421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.769608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.769633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.769820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.769845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.770012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.770038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.770203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.770228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.770421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.770446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 
00:34:48.414 [2024-07-14 09:44:32.770632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.770658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.770821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.770847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.771014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.771040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.771201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.771227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.771387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.771412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.771627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.771656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.771813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.771838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.772034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.772060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.772251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.772278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.772438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.772464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 
00:34:48.414 [2024-07-14 09:44:32.772654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.772679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.772861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.772894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.773060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.773085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.773298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.773323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.773488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.773514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.773705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.773730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.773944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.773970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.774132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.774158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.774316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.774341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.774525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.774551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 
00:34:48.414 [2024-07-14 09:44:32.774767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.774792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.774957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.774982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.775172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.775198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.775379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.775405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.775588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.775613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.775814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.775839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.776028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.776054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.776218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.776243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.776407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.776432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.776620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.776646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 
00:34:48.414 [2024-07-14 09:44:32.776826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.776851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.777072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.777098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.777279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.777305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.777497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.777522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.777736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.777761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.777946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.777972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.778131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.778156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.778311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.778336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.778524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.778549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.778742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.778768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 
00:34:48.414 [2024-07-14 09:44:32.778928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.778954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.779116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.779141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.779307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.779334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.779551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.779577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.779769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.779795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.779952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.779978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.780194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.780223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.780412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.780437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.780600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.780625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 00:34:48.414 [2024-07-14 09:44:32.780839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.780864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.414 qpair failed and we were unable to recover it. 
00:34:48.414 [2024-07-14 09:44:32.781056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.414 [2024-07-14 09:44:32.781082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.415 qpair failed and we were unable to recover it. 00:34:48.415 [2024-07-14 09:44:32.781276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.415 [2024-07-14 09:44:32.781302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.415 qpair failed and we were unable to recover it. 00:34:48.415 [2024-07-14 09:44:32.781515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.415 [2024-07-14 09:44:32.781540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.415 qpair failed and we were unable to recover it. 00:34:48.415 [2024-07-14 09:44:32.781731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.415 [2024-07-14 09:44:32.781756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.415 qpair failed and we were unable to recover it. 00:34:48.415 [2024-07-14 09:44:32.781943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.415 [2024-07-14 09:44:32.781970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.415 qpair failed and we were unable to recover it. 00:34:48.415 [2024-07-14 09:44:32.782183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.415 [2024-07-14 09:44:32.782208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.415 qpair failed and we were unable to recover it. 00:34:48.415 [2024-07-14 09:44:32.782401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.415 [2024-07-14 09:44:32.782427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.415 qpair failed and we were unable to recover it. 00:34:48.415 [2024-07-14 09:44:32.782586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.415 [2024-07-14 09:44:32.782611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.415 qpair failed and we were unable to recover it. 00:34:48.415 [2024-07-14 09:44:32.782824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.415 [2024-07-14 09:44:32.782850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.415 qpair failed and we were unable to recover it. 00:34:48.415 [2024-07-14 09:44:32.783047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.415 [2024-07-14 09:44:32.783072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.415 qpair failed and we were unable to recover it. 
00:34:48.415 [2024-07-14 09:44:32.783244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.415 [2024-07-14 09:44:32.783269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.415 qpair failed and we were unable to recover it. 00:34:48.415 [2024-07-14 09:44:32.783433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.415 [2024-07-14 09:44:32.783460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.415 qpair failed and we were unable to recover it. 00:34:48.415 [2024-07-14 09:44:32.783649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.415 [2024-07-14 09:44:32.783675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.415 qpair failed and we were unable to recover it. 00:34:48.415 [2024-07-14 09:44:32.783860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.415 [2024-07-14 09:44:32.783891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.415 qpair failed and we were unable to recover it. 00:34:48.415 [2024-07-14 09:44:32.784075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.415 [2024-07-14 09:44:32.784101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.415 qpair failed and we were unable to recover it. 00:34:48.415 [2024-07-14 09:44:32.784313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.415 [2024-07-14 09:44:32.784338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.415 qpair failed and we were unable to recover it. 00:34:48.415 [2024-07-14 09:44:32.784499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.415 [2024-07-14 09:44:32.784524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.415 qpair failed and we were unable to recover it. 00:34:48.415 [2024-07-14 09:44:32.784683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.415 [2024-07-14 09:44:32.784709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.415 qpair failed and we were unable to recover it. 00:34:48.415 [2024-07-14 09:44:32.784921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.415 [2024-07-14 09:44:32.784947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.415 qpair failed and we were unable to recover it. 00:34:48.415 [2024-07-14 09:44:32.785139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.415 [2024-07-14 09:44:32.785164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.415 qpair failed and we were unable to recover it. 
00:34:48.415 [2024-07-14 09:44:32.785350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.415 [2024-07-14 09:44:32.785376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.415 qpair failed and we were unable to recover it. 00:34:48.415 [2024-07-14 09:44:32.785561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.415 [2024-07-14 09:44:32.785586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.415 qpair failed and we were unable to recover it. 00:34:48.415 [2024-07-14 09:44:32.785774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.415 [2024-07-14 09:44:32.785800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.415 qpair failed and we were unable to recover it. 00:34:48.415 [2024-07-14 09:44:32.786016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.415 [2024-07-14 09:44:32.786042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.415 qpair failed and we were unable to recover it. 00:34:48.415 [2024-07-14 09:44:32.786237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.415 [2024-07-14 09:44:32.786262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.415 qpair failed and we were unable to recover it. 00:34:48.415 [2024-07-14 09:44:32.786451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.415 [2024-07-14 09:44:32.786476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.415 qpair failed and we were unable to recover it. 00:34:48.415 [2024-07-14 09:44:32.786631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.415 [2024-07-14 09:44:32.786657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.415 qpair failed and we were unable to recover it. 00:34:48.415 [2024-07-14 09:44:32.786821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.415 [2024-07-14 09:44:32.786846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.415 qpair failed and we were unable to recover it. 00:34:48.415 [2024-07-14 09:44:32.787048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.415 [2024-07-14 09:44:32.787074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.415 qpair failed and we were unable to recover it. 00:34:48.415 [2024-07-14 09:44:32.787262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.415 [2024-07-14 09:44:32.787288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.415 qpair failed and we were unable to recover it. 
00:34:48.415 [2024-07-14 09:44:32.787450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.415 [2024-07-14 09:44:32.787475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.415 qpair failed and we were unable to recover it. 00:34:48.415 [2024-07-14 09:44:32.787670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.415 [2024-07-14 09:44:32.787696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.415 qpair failed and we were unable to recover it. 00:34:48.415 [2024-07-14 09:44:32.787880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.415 [2024-07-14 09:44:32.787906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.415 qpair failed and we were unable to recover it. 00:34:48.415 [2024-07-14 09:44:32.788101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.415 [2024-07-14 09:44:32.788126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.415 qpair failed and we were unable to recover it. 00:34:48.415 [2024-07-14 09:44:32.788281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.415 [2024-07-14 09:44:32.788306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.415 qpair failed and we were unable to recover it. 00:34:48.415 [2024-07-14 09:44:32.788509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.415 [2024-07-14 09:44:32.788535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.415 qpair failed and we were unable to recover it. 00:34:48.415 [2024-07-14 09:44:32.788722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.415 [2024-07-14 09:44:32.788748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.415 qpair failed and we were unable to recover it. 00:34:48.415 [2024-07-14 09:44:32.788909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.415 [2024-07-14 09:44:32.788935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.415 qpair failed and we were unable to recover it. 00:34:48.415 [2024-07-14 09:44:32.789088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.415 [2024-07-14 09:44:32.789114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.415 qpair failed and we were unable to recover it. 00:34:48.415 [2024-07-14 09:44:32.789296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.415 [2024-07-14 09:44:32.789322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.415 qpair failed and we were unable to recover it. 
00:34:48.415 [2024-07-14 09:44:32.789508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.415 [2024-07-14 09:44:32.789534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.415 qpair failed and we were unable to recover it. 00:34:48.415 [2024-07-14 09:44:32.789749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.415 [2024-07-14 09:44:32.789774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.415 qpair failed and we were unable to recover it. 00:34:48.415 [2024-07-14 09:44:32.789963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.415 [2024-07-14 09:44:32.789990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.415 qpair failed and we were unable to recover it. 00:34:48.415 [2024-07-14 09:44:32.790173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.415 [2024-07-14 09:44:32.790199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.415 qpair failed and we were unable to recover it. 00:34:48.415 [2024-07-14 09:44:32.790358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.415 [2024-07-14 09:44:32.790383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.415 qpair failed and we were unable to recover it. 00:34:48.415 [2024-07-14 09:44:32.790598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.415 [2024-07-14 09:44:32.790623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.415 qpair failed and we were unable to recover it. 00:34:48.415 [2024-07-14 09:44:32.790817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.415 [2024-07-14 09:44:32.790842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.415 qpair failed and we were unable to recover it. 00:34:48.415 [2024-07-14 09:44:32.791016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.415 [2024-07-14 09:44:32.791041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.415 qpair failed and we were unable to recover it. 00:34:48.415 [2024-07-14 09:44:32.791225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.415 [2024-07-14 09:44:32.791250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.415 qpair failed and we were unable to recover it. 00:34:48.415 [2024-07-14 09:44:32.791465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.415 [2024-07-14 09:44:32.791491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.415 qpair failed and we were unable to recover it. 
00:34:48.415 [2024-07-14 09:44:32.791659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.415 [2024-07-14 09:44:32.791685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.415 qpair failed and we were unable to recover it.
00:34:48.415-00:34:48.418 [the same three-message sequence (posix.c:1038:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 09:44:32.791659 through 09:44:32.840616 - duplicate entries omitted]
00:34:48.418 [2024-07-14 09:44:32.840831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.418 [2024-07-14 09:44:32.840856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.418 qpair failed and we were unable to recover it. 00:34:48.418 [2024-07-14 09:44:32.841017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.418 [2024-07-14 09:44:32.841043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.418 qpair failed and we were unable to recover it. 00:34:48.418 [2024-07-14 09:44:32.841240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.418 [2024-07-14 09:44:32.841271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.418 qpair failed and we were unable to recover it. 00:34:48.418 [2024-07-14 09:44:32.841438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.418 [2024-07-14 09:44:32.841473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.418 qpair failed and we were unable to recover it. 00:34:48.418 [2024-07-14 09:44:32.841669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.418 [2024-07-14 09:44:32.841695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.418 qpair failed and we were unable to recover it. 00:34:48.418 [2024-07-14 09:44:32.841881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.418 [2024-07-14 09:44:32.841907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.418 qpair failed and we were unable to recover it. 00:34:48.418 [2024-07-14 09:44:32.842076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.418 [2024-07-14 09:44:32.842102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.418 qpair failed and we were unable to recover it. 00:34:48.418 [2024-07-14 09:44:32.842266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.418 [2024-07-14 09:44:32.842292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.418 qpair failed and we were unable to recover it. 00:34:48.418 [2024-07-14 09:44:32.842505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.418 [2024-07-14 09:44:32.842534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.418 qpair failed and we were unable to recover it. 00:34:48.418 [2024-07-14 09:44:32.842696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.418 [2024-07-14 09:44:32.842721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.418 qpair failed and we were unable to recover it. 
00:34:48.418 [2024-07-14 09:44:32.842943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.418 [2024-07-14 09:44:32.842977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.418 qpair failed and we were unable to recover it. 00:34:48.418 [2024-07-14 09:44:32.843152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.418 [2024-07-14 09:44:32.843178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.418 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.843371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.843397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.843566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.843591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.843776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.843801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.843995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.844021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.844186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.844212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.844378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.844405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.844613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.844651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.844824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.844850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 
00:34:48.694 [2024-07-14 09:44:32.845071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.845097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.845293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.845321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.845498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.845525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.845708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.845736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.845924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.845950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.846108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.846133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.846299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.846326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.846519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.846546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.846737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.846763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.846925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.846952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 
00:34:48.694 [2024-07-14 09:44:32.847107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.847133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.847318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.847345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.847542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.847569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.847762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.847788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.848012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.848039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.848229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.848255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.848413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.848438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.848621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.848647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.848833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.848858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.849026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.849052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 
00:34:48.694 [2024-07-14 09:44:32.849208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.849233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.849420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.849446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.849606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.849631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.849829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.849854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.850050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.850076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.850264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.850289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.850497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.850523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.850703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.850729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.850916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.850943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.851105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.851135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 
00:34:48.694 [2024-07-14 09:44:32.851324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.851349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.851537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.851562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.851716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.851741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.851960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.851998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.852189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.852215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.852408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.852435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.852627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.852653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.852841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.852873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.853071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.853097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.853280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.853305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 
00:34:48.694 [2024-07-14 09:44:32.853494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.853519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.853681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.853706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.853862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.853903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.854071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.854097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.854315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.854341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.854528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.854553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.854737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.854762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.854978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.855004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.855163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.855189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.855393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.855418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 
00:34:48.694 [2024-07-14 09:44:32.855653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.855679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.855880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.855906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.856085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.856109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.856304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.856329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.856495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.856520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.856742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.856767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.856958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.856988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.857175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.857201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.857351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.857376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.857603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.857629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 
00:34:48.694 [2024-07-14 09:44:32.857792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.857817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.858006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.858032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.858258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.858283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.858506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.858532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.858694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.858719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.694 [2024-07-14 09:44:32.858880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.694 [2024-07-14 09:44:32.858906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.694 qpair failed and we were unable to recover it. 00:34:48.695 [2024-07-14 09:44:32.859099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.695 [2024-07-14 09:44:32.859124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.695 qpair failed and we were unable to recover it. 00:34:48.695 [2024-07-14 09:44:32.859311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.695 [2024-07-14 09:44:32.859337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.695 qpair failed and we were unable to recover it. 00:34:48.695 [2024-07-14 09:44:32.859552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.695 [2024-07-14 09:44:32.859577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.695 qpair failed and we were unable to recover it. 00:34:48.695 [2024-07-14 09:44:32.859736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.695 [2024-07-14 09:44:32.859761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.695 qpair failed and we were unable to recover it. 
00:34:48.695 [2024-07-14 09:44:32.859956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.695 [2024-07-14 09:44:32.859982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.695 qpair failed and we were unable to recover it. 00:34:48.695 [2024-07-14 09:44:32.860171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.695 [2024-07-14 09:44:32.860197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.695 qpair failed and we were unable to recover it. 00:34:48.695 [2024-07-14 09:44:32.860388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.695 [2024-07-14 09:44:32.860413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.695 qpair failed and we were unable to recover it. 00:34:48.695 [2024-07-14 09:44:32.860600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.695 [2024-07-14 09:44:32.860625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.695 qpair failed and we were unable to recover it. 00:34:48.695 [2024-07-14 09:44:32.860818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.695 [2024-07-14 09:44:32.860843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.695 qpair failed and we were unable to recover it. 00:34:48.695 [2024-07-14 09:44:32.861020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.695 [2024-07-14 09:44:32.861046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.695 qpair failed and we were unable to recover it. 00:34:48.695 [2024-07-14 09:44:32.861238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.695 [2024-07-14 09:44:32.861265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.695 qpair failed and we were unable to recover it. 00:34:48.695 [2024-07-14 09:44:32.861455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.695 [2024-07-14 09:44:32.861480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.695 qpair failed and we were unable to recover it. 00:34:48.695 [2024-07-14 09:44:32.861653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.695 [2024-07-14 09:44:32.861678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.695 qpair failed and we were unable to recover it. 00:34:48.695 [2024-07-14 09:44:32.861886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.695 [2024-07-14 09:44:32.861913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.695 qpair failed and we were unable to recover it. 
00:34:48.695 [2024-07-14 09:44:32.862098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.695 [2024-07-14 09:44:32.862124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.695 qpair failed and we were unable to recover it. 00:34:48.695 [2024-07-14 09:44:32.862278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.695 [2024-07-14 09:44:32.862304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.695 qpair failed and we were unable to recover it. 00:34:48.695 [2024-07-14 09:44:32.862495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.695 [2024-07-14 09:44:32.862522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.695 qpair failed and we were unable to recover it. 00:34:48.695 [2024-07-14 09:44:32.862688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.695 [2024-07-14 09:44:32.862714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.695 qpair failed and we were unable to recover it. 00:34:48.695 [2024-07-14 09:44:32.862910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.695 [2024-07-14 09:44:32.862937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.695 qpair failed and we were unable to recover it. 00:34:48.695 [2024-07-14 09:44:32.863093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.695 [2024-07-14 09:44:32.863118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.695 qpair failed and we were unable to recover it. 00:34:48.695 [2024-07-14 09:44:32.863306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.695 [2024-07-14 09:44:32.863332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.695 qpair failed and we were unable to recover it. 00:34:48.695 [2024-07-14 09:44:32.863498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.695 [2024-07-14 09:44:32.863524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.695 qpair failed and we were unable to recover it. 00:34:48.695 [2024-07-14 09:44:32.863727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.695 [2024-07-14 09:44:32.863752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.695 qpair failed and we were unable to recover it. 00:34:48.695 [2024-07-14 09:44:32.863919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.695 [2024-07-14 09:44:32.863945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.695 qpair failed and we were unable to recover it. 
00:34:48.695 [2024-07-14 09:44:32.864131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.695 [2024-07-14 09:44:32.864156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.695 qpair failed and we were unable to recover it. 00:34:48.695 [2024-07-14 09:44:32.864376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.695 [2024-07-14 09:44:32.864402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.695 qpair failed and we were unable to recover it. 00:34:48.695 [2024-07-14 09:44:32.864560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.695 [2024-07-14 09:44:32.864585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.695 qpair failed and we were unable to recover it. 00:34:48.695 [2024-07-14 09:44:32.864755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.695 [2024-07-14 09:44:32.864781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.695 qpair failed and we were unable to recover it. 00:34:48.695 [2024-07-14 09:44:32.864967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.695 [2024-07-14 09:44:32.864993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.695 qpair failed and we were unable to recover it. 00:34:48.695 [2024-07-14 09:44:32.865162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.695 [2024-07-14 09:44:32.865188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.695 qpair failed and we were unable to recover it. 00:34:48.695 [2024-07-14 09:44:32.865372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.695 [2024-07-14 09:44:32.865397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.695 qpair failed and we were unable to recover it. 00:34:48.695 [2024-07-14 09:44:32.865555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.695 [2024-07-14 09:44:32.865585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.695 qpair failed and we were unable to recover it. 00:34:48.695 [2024-07-14 09:44:32.865773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.695 [2024-07-14 09:44:32.865799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.695 qpair failed and we were unable to recover it. 00:34:48.695 [2024-07-14 09:44:32.866010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.695 [2024-07-14 09:44:32.866036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.695 qpair failed and we were unable to recover it. 
00:34:48.695 [2024-07-14 09:44:32.866261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.695 [2024-07-14 09:44:32.866287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.695 qpair failed and we were unable to recover it. 00:34:48.695 [2024-07-14 09:44:32.866480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.695 [2024-07-14 09:44:32.866505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.695 qpair failed and we were unable to recover it. 00:34:48.695 [2024-07-14 09:44:32.866719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.695 [2024-07-14 09:44:32.866744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.695 qpair failed and we were unable to recover it. 00:34:48.695 [2024-07-14 09:44:32.866935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.695 [2024-07-14 09:44:32.866961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.695 qpair failed and we were unable to recover it. 00:34:48.695 [2024-07-14 09:44:32.867150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.695 [2024-07-14 09:44:32.867176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.695 qpair failed and we were unable to recover it. 00:34:48.695 [2024-07-14 09:44:32.867344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.695 [2024-07-14 09:44:32.867371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.695 qpair failed and we were unable to recover it. 00:34:48.695 [2024-07-14 09:44:32.867531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.695 [2024-07-14 09:44:32.867556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.695 qpair failed and we were unable to recover it. 00:34:48.695 [2024-07-14 09:44:32.867756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.695 [2024-07-14 09:44:32.867781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.695 qpair failed and we were unable to recover it. 00:34:48.695 [2024-07-14 09:44:32.867943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.695 [2024-07-14 09:44:32.867971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.695 qpair failed and we were unable to recover it. 00:34:48.695 [2024-07-14 09:44:32.868168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.695 [2024-07-14 09:44:32.868194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.695 qpair failed and we were unable to recover it. 
00:34:48.695 [2024-07-14 09:44:32.868383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:48.695 [2024-07-14 09:44:32.868409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420
00:34:48.695 qpair failed and we were unable to recover it.
00:34:48.695 [2024-07-14 09:44:32.868630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:48.695 [2024-07-14 09:44:32.868655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420
00:34:48.695 qpair failed and we were unable to recover it.
00:34:48.695 [2024-07-14 09:44:32.868821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:48.695 [2024-07-14 09:44:32.868847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420
00:34:48.695 qpair failed and we were unable to recover it.
[... the identical three-line failure (posix.c:1038:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every subsequent reconnect attempt, with timestamps running from 09:44:32.869043 through 09:44:32.912903 ...]
00:34:48.698 [2024-07-14 09:44:32.913093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:48.698 [2024-07-14 09:44:32.913118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420
00:34:48.698 qpair failed and we were unable to recover it.
00:34:48.698 [2024-07-14 09:44:32.913284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.913310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.913496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.913522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.913685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.913710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.913929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.913955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.914163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.914189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.914377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.914402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.914567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.914592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.914778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.914804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.915014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.915041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.915202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.915228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 
00:34:48.698 [2024-07-14 09:44:32.915443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.915469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.915636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.915662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.915850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.915881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.916074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.916099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.916263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.916288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.916446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.916472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.916659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.916685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.916875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.916900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.917116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.917141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.917326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.917352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 
00:34:48.698 [2024-07-14 09:44:32.917536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.917561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.917753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.917779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.917945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.917976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.918188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.918213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.918382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.918408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.918573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.918599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.918787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.918812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.918980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.919006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.919163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.919189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.919399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.919424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 
00:34:48.698 [2024-07-14 09:44:32.919590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.919615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.919778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.919803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.919962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.919988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.920182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.920208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.920366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.920391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.920559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.920585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.920776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.920802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.921017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.921042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.921231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.921257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.921471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.921497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 
00:34:48.698 [2024-07-14 09:44:32.921686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.921711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.921894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.921920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.922087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.922127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.922348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.922374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.922528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.922554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.922711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.922737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.922924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.922950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.923105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.923131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.923362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.923387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.923618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.923648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 
00:34:48.698 [2024-07-14 09:44:32.923839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.923870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.924064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.924089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.924268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.924294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.924448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.924474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.924637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.924662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.924852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.924883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.925067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.925092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.925248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.925273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.925461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.925487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.925668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.925693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 
00:34:48.698 [2024-07-14 09:44:32.925861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.925903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.926094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.926119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.926308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.926334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.926528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.926553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.926719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.926744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.926931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.926957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.927138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.927163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.927347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.927371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.927582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.927606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.927803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.927829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 
00:34:48.698 [2024-07-14 09:44:32.928017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.928042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.928243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.928269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.928493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.928519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.698 [2024-07-14 09:44:32.928711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.698 [2024-07-14 09:44:32.928737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.698 qpair failed and we were unable to recover it. 00:34:48.699 [2024-07-14 09:44:32.928902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.928928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 00:34:48.699 [2024-07-14 09:44:32.929117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.929142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 00:34:48.699 [2024-07-14 09:44:32.929323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.929348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 00:34:48.699 [2024-07-14 09:44:32.929566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.929592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 00:34:48.699 [2024-07-14 09:44:32.929750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.929776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 00:34:48.699 [2024-07-14 09:44:32.929944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.929969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 
00:34:48.699 [2024-07-14 09:44:32.930130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.930156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 00:34:48.699 [2024-07-14 09:44:32.930349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.930375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 00:34:48.699 [2024-07-14 09:44:32.930535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.930560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 00:34:48.699 [2024-07-14 09:44:32.930719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.930744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 00:34:48.699 [2024-07-14 09:44:32.930935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.930962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 00:34:48.699 [2024-07-14 09:44:32.931178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.931203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 00:34:48.699 [2024-07-14 09:44:32.931365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.931391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 00:34:48.699 [2024-07-14 09:44:32.931563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.931587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 00:34:48.699 [2024-07-14 09:44:32.931812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.931837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 00:34:48.699 [2024-07-14 09:44:32.932035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.932061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 
00:34:48.699 [2024-07-14 09:44:32.932282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.932312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 00:34:48.699 [2024-07-14 09:44:32.932493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.932519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 00:34:48.699 [2024-07-14 09:44:32.932683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.932709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 00:34:48.699 [2024-07-14 09:44:32.932898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.932924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 00:34:48.699 [2024-07-14 09:44:32.933138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.933163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 00:34:48.699 [2024-07-14 09:44:32.933350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.933375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 00:34:48.699 [2024-07-14 09:44:32.933541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.933566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 00:34:48.699 [2024-07-14 09:44:32.933751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.933776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 00:34:48.699 [2024-07-14 09:44:32.933973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.933999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 00:34:48.699 [2024-07-14 09:44:32.934190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.934216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 
00:34:48.699 [2024-07-14 09:44:32.934406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.934432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 00:34:48.699 [2024-07-14 09:44:32.934619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.934645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 00:34:48.699 [2024-07-14 09:44:32.934810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.934835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 00:34:48.699 [2024-07-14 09:44:32.935026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.935052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 00:34:48.699 [2024-07-14 09:44:32.935268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.935294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 00:34:48.699 [2024-07-14 09:44:32.935484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.935509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 00:34:48.699 [2024-07-14 09:44:32.935695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.935721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 00:34:48.699 [2024-07-14 09:44:32.935936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.935962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 00:34:48.699 [2024-07-14 09:44:32.936124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.936149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 00:34:48.699 [2024-07-14 09:44:32.936349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.936375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 
00:34:48.699 [2024-07-14 09:44:32.936538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.936563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 00:34:48.699 [2024-07-14 09:44:32.936747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.936772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 00:34:48.699 [2024-07-14 09:44:32.936979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.937005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 00:34:48.699 [2024-07-14 09:44:32.937170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.937195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 00:34:48.699 [2024-07-14 09:44:32.937408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.937434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 00:34:48.699 [2024-07-14 09:44:32.937593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.937618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 00:34:48.699 [2024-07-14 09:44:32.937810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.937836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 00:34:48.699 [2024-07-14 09:44:32.938032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.938058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 00:34:48.699 [2024-07-14 09:44:32.938257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.938283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 00:34:48.699 [2024-07-14 09:44:32.938470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.938495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 
00:34:48.699 [2024-07-14 09:44:32.938692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.938717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 00:34:48.699 [2024-07-14 09:44:32.938876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.938902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 00:34:48.699 [2024-07-14 09:44:32.939092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.939117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 00:34:48.699 [2024-07-14 09:44:32.939332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.939357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 00:34:48.699 [2024-07-14 09:44:32.939541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.939566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 00:34:48.699 [2024-07-14 09:44:32.939758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.939783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 00:34:48.699 [2024-07-14 09:44:32.939971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.939997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 00:34:48.699 [2024-07-14 09:44:32.940162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.940187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 00:34:48.699 [2024-07-14 09:44:32.940370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.940395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 00:34:48.699 [2024-07-14 09:44:32.940587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.940613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 
00:34:48.699 [2024-07-14 09:44:32.940772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.940797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 00:34:48.699 [2024-07-14 09:44:32.941001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.941027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 00:34:48.699 [2024-07-14 09:44:32.941188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.941214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 00:34:48.699 [2024-07-14 09:44:32.941432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.941457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 00:34:48.699 [2024-07-14 09:44:32.941615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.941640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 00:34:48.699 [2024-07-14 09:44:32.941804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.941829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 00:34:48.699 [2024-07-14 09:44:32.942053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.942079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 00:34:48.699 [2024-07-14 09:44:32.942266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.942291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 00:34:48.699 [2024-07-14 09:44:32.942472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.942498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 00:34:48.699 [2024-07-14 09:44:32.942695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.699 [2024-07-14 09:44:32.942720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.699 qpair failed and we were unable to recover it. 
00:34:48.699 [log condensed] The same two-line error repeats continuously from [2024-07-14 09:44:32.942912] through [2024-07-14 09:44:32.985377] (console timestamps 00:34:48.699-00:34:48.702): posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420, and every attempt ends with "qpair failed and we were unable to recover it."
00:34:48.702 [2024-07-14 09:44:32.985589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.702 [2024-07-14 09:44:32.985614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.702 qpair failed and we were unable to recover it. 00:34:48.702 [2024-07-14 09:44:32.985800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.702 [2024-07-14 09:44:32.985825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.702 qpair failed and we were unable to recover it. 00:34:48.702 [2024-07-14 09:44:32.986016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.702 [2024-07-14 09:44:32.986042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.702 qpair failed and we were unable to recover it. 00:34:48.702 [2024-07-14 09:44:32.986256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.702 [2024-07-14 09:44:32.986282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.702 qpair failed and we were unable to recover it. 00:34:48.702 [2024-07-14 09:44:32.986470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.702 [2024-07-14 09:44:32.986496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.702 qpair failed and we were unable to recover it. 00:34:48.702 [2024-07-14 09:44:32.986686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.702 [2024-07-14 09:44:32.986712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.702 qpair failed and we were unable to recover it. 00:34:48.702 [2024-07-14 09:44:32.986895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.702 [2024-07-14 09:44:32.986922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.702 qpair failed and we were unable to recover it. 00:34:48.702 [2024-07-14 09:44:32.987112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.702 [2024-07-14 09:44:32.987137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.702 qpair failed and we were unable to recover it. 00:34:48.702 [2024-07-14 09:44:32.987325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.702 [2024-07-14 09:44:32.987351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.702 qpair failed and we were unable to recover it. 00:34:48.702 [2024-07-14 09:44:32.987555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.702 [2024-07-14 09:44:32.987581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.702 qpair failed and we were unable to recover it. 
00:34:48.702 [2024-07-14 09:44:32.987768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.702 [2024-07-14 09:44:32.987794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.702 qpair failed and we were unable to recover it. 00:34:48.702 [2024-07-14 09:44:32.987986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.702 [2024-07-14 09:44:32.988012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.702 qpair failed and we were unable to recover it. 00:34:48.702 [2024-07-14 09:44:32.988174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.702 [2024-07-14 09:44:32.988200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.702 qpair failed and we were unable to recover it. 00:34:48.702 [2024-07-14 09:44:32.988385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.702 [2024-07-14 09:44:32.988410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.702 qpair failed and we were unable to recover it. 00:34:48.702 [2024-07-14 09:44:32.988582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.702 [2024-07-14 09:44:32.988608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.702 qpair failed and we were unable to recover it. 00:34:48.702 [2024-07-14 09:44:32.988822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.702 [2024-07-14 09:44:32.988848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.702 qpair failed and we were unable to recover it. 00:34:48.702 [2024-07-14 09:44:32.989045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.702 [2024-07-14 09:44:32.989071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.702 qpair failed and we were unable to recover it. 00:34:48.702 [2024-07-14 09:44:32.989258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.702 [2024-07-14 09:44:32.989284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.702 qpair failed and we were unable to recover it. 00:34:48.702 [2024-07-14 09:44:32.989442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.702 [2024-07-14 09:44:32.989467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.702 qpair failed and we were unable to recover it. 00:34:48.702 [2024-07-14 09:44:32.989658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.702 [2024-07-14 09:44:32.989683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.702 qpair failed and we were unable to recover it. 
00:34:48.702 [2024-07-14 09:44:32.989847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.702 [2024-07-14 09:44:32.989879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.702 qpair failed and we were unable to recover it. 00:34:48.702 [2024-07-14 09:44:32.990098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.702 [2024-07-14 09:44:32.990124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.702 qpair failed and we were unable to recover it. 00:34:48.702 [2024-07-14 09:44:32.990281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.702 [2024-07-14 09:44:32.990310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.702 qpair failed and we were unable to recover it. 00:34:48.702 [2024-07-14 09:44:32.990518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.702 [2024-07-14 09:44:32.990544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.702 qpair failed and we were unable to recover it. 00:34:48.702 [2024-07-14 09:44:32.990761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.702 [2024-07-14 09:44:32.990787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.702 qpair failed and we were unable to recover it. 00:34:48.702 [2024-07-14 09:44:32.991018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.702 [2024-07-14 09:44:32.991044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.702 qpair failed and we were unable to recover it. 00:34:48.702 [2024-07-14 09:44:32.991274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.702 [2024-07-14 09:44:32.991301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.702 qpair failed and we were unable to recover it. 00:34:48.702 [2024-07-14 09:44:32.991470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.702 [2024-07-14 09:44:32.991495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.702 qpair failed and we were unable to recover it. 00:34:48.702 [2024-07-14 09:44:32.991683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.702 [2024-07-14 09:44:32.991708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.702 qpair failed and we were unable to recover it. 00:34:48.702 [2024-07-14 09:44:32.991882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.702 [2024-07-14 09:44:32.991908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.702 qpair failed and we were unable to recover it. 
00:34:48.702 [2024-07-14 09:44:32.992097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.702 [2024-07-14 09:44:32.992123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.702 qpair failed and we were unable to recover it. 00:34:48.702 [2024-07-14 09:44:32.992292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.702 [2024-07-14 09:44:32.992317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.702 qpair failed and we were unable to recover it. 00:34:48.702 [2024-07-14 09:44:32.992474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.702 [2024-07-14 09:44:32.992500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.702 qpair failed and we were unable to recover it. 00:34:48.702 [2024-07-14 09:44:32.992725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.702 [2024-07-14 09:44:32.992750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.702 qpair failed and we were unable to recover it. 00:34:48.702 [2024-07-14 09:44:32.992916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.702 [2024-07-14 09:44:32.992942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.702 qpair failed and we were unable to recover it. 00:34:48.702 [2024-07-14 09:44:32.993159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.702 [2024-07-14 09:44:32.993185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.702 qpair failed and we were unable to recover it. 00:34:48.702 [2024-07-14 09:44:32.993382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.702 [2024-07-14 09:44:32.993408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.702 qpair failed and we were unable to recover it. 00:34:48.702 [2024-07-14 09:44:32.993565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.702 [2024-07-14 09:44:32.993591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.702 qpair failed and we were unable to recover it. 00:34:48.702 [2024-07-14 09:44:32.993756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.702 [2024-07-14 09:44:32.993789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.702 qpair failed and we were unable to recover it. 00:34:48.702 [2024-07-14 09:44:32.993978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.702 [2024-07-14 09:44:32.994004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.702 qpair failed and we were unable to recover it. 
00:34:48.702 [2024-07-14 09:44:32.994195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.702 [2024-07-14 09:44:32.994220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.702 qpair failed and we were unable to recover it. 00:34:48.702 [2024-07-14 09:44:32.994405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.702 [2024-07-14 09:44:32.994430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.702 qpair failed and we were unable to recover it. 00:34:48.702 [2024-07-14 09:44:32.994601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.702 [2024-07-14 09:44:32.994626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:32.994814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:32.994839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:32.995054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:32.995080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:32.995276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:32.995303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:32.995465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:32.995491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:32.995655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:32.995680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:32.995863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:32.995896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:32.996051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:32.996076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 
00:34:48.703 [2024-07-14 09:44:32.996265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:32.996291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:32.996478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:32.996503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:32.996716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:32.996741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:32.996897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:32.996932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:32.997094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:32.997121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:32.997309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:32.997334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:32.997525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:32.997551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:32.997778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:32.997804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:32.998003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:32.998029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:32.998191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:32.998216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 
00:34:48.703 [2024-07-14 09:44:32.998409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:32.998435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:32.998622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:32.998648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:32.998831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:32.998857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:32.999091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:32.999120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:32.999306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:32.999336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:32.999523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:32.999549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:32.999706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:32.999731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:32.999895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:32.999921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:33.000073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:33.000099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:33.000293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:33.000318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 
00:34:48.703 [2024-07-14 09:44:33.000506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:33.000532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:33.000723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:33.000749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:33.000913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:33.000940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:33.001094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:33.001119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:33.001309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:33.001335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:33.001550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:33.001576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:33.001731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:33.001756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:33.001952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:33.001979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:33.002145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:33.002171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:33.002356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:33.002381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 
00:34:48.703 [2024-07-14 09:44:33.002566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:33.002593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:33.002776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:33.002802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:33.002992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:33.003018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:33.003198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:33.003224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:33.003410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:33.003435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:33.003615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:33.003640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:33.003794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:33.003820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:33.004015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:33.004041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:33.004228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:33.004253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:33.004436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:33.004461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 
00:34:48.703 [2024-07-14 09:44:33.004637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:33.004663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:33.004854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:33.004888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:33.005102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:33.005127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:33.005292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:33.005318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:33.005499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:33.005525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:33.005715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:33.005741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:33.005915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:33.005942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:33.006109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:33.006145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:33.006332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:33.006358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:33.006521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:33.006549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 
00:34:48.703 [2024-07-14 09:44:33.006707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:33.006733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:33.006925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:33.006951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:33.007143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:33.007169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:33.007359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:33.007386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:33.007584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:33.007610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:33.007774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:33.007799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:33.007989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:33.008014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:33.008217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:33.008242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:33.008431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:33.008459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:33.008622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:33.008649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 
00:34:48.703 [2024-07-14 09:44:33.008876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:33.008904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:33.009104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:33.009139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:33.009323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:33.009349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:33.009535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:33.009560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:33.009723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:33.009752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:33.009931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:33.009958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:33.010177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:33.010203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:33.010390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:33.010416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:33.010582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:33.010608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:33.010802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:33.010827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 
00:34:48.703 [2024-07-14 09:44:33.011058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:33.011084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:33.011282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:33.011308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:33.011501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:33.011527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:33.011723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:33.011749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:33.011943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:33.011969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:33.012135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:33.012161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:33.012352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:33.012377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:33.012534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:33.012560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:33.012778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:33.012803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 00:34:48.703 [2024-07-14 09:44:33.013007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.703 [2024-07-14 09:44:33.013034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.703 qpair failed and we were unable to recover it. 
00:34:48.703 [2024-07-14 09:44:33.013187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.704 [2024-07-14 09:44:33.013212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.704 qpair failed and we were unable to recover it.
00:34:48.704 [The same failure sequence — posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. — repeats continuously for timestamps 2024-07-14 09:44:33.013 through 09:44:33.058.]
00:34:48.706 [2024-07-14 09:44:33.058642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.706 [2024-07-14 09:44:33.058668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.706 qpair failed and we were unable to recover it. 00:34:48.706 [2024-07-14 09:44:33.058830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.706 [2024-07-14 09:44:33.058856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.706 qpair failed and we were unable to recover it. 00:34:48.706 [2024-07-14 09:44:33.059026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.706 [2024-07-14 09:44:33.059052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.706 qpair failed and we were unable to recover it. 00:34:48.706 [2024-07-14 09:44:33.059244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.706 [2024-07-14 09:44:33.059269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.706 qpair failed and we were unable to recover it. 00:34:48.706 [2024-07-14 09:44:33.059484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.706 [2024-07-14 09:44:33.059510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.706 qpair failed and we were unable to recover it. 00:34:48.706 [2024-07-14 09:44:33.059701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.706 [2024-07-14 09:44:33.059726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.706 qpair failed and we were unable to recover it. 00:34:48.706 [2024-07-14 09:44:33.059919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.706 [2024-07-14 09:44:33.059955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.706 qpair failed and we were unable to recover it. 00:34:48.706 [2024-07-14 09:44:33.060146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.706 [2024-07-14 09:44:33.060172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.706 qpair failed and we were unable to recover it. 00:34:48.706 [2024-07-14 09:44:33.060358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.706 [2024-07-14 09:44:33.060384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.706 qpair failed and we were unable to recover it. 00:34:48.706 [2024-07-14 09:44:33.060579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.706 [2024-07-14 09:44:33.060605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.706 qpair failed and we were unable to recover it. 
00:34:48.706 [2024-07-14 09:44:33.060795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.706 [2024-07-14 09:44:33.060820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.706 qpair failed and we were unable to recover it. 00:34:48.706 [2024-07-14 09:44:33.061054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.706 [2024-07-14 09:44:33.061080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.706 qpair failed and we were unable to recover it. 00:34:48.706 [2024-07-14 09:44:33.061265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.706 [2024-07-14 09:44:33.061292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.706 qpair failed and we were unable to recover it. 00:34:48.706 [2024-07-14 09:44:33.061506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.706 [2024-07-14 09:44:33.061531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.706 qpair failed and we were unable to recover it. 00:34:48.706 [2024-07-14 09:44:33.061724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.706 [2024-07-14 09:44:33.061750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.706 qpair failed and we were unable to recover it. 00:34:48.706 [2024-07-14 09:44:33.061965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.706 [2024-07-14 09:44:33.061991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.706 qpair failed and we were unable to recover it. 00:34:48.706 [2024-07-14 09:44:33.062155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.706 [2024-07-14 09:44:33.062181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.706 qpair failed and we were unable to recover it. 00:34:48.706 [2024-07-14 09:44:33.062367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.706 [2024-07-14 09:44:33.062393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.706 qpair failed and we were unable to recover it. 00:34:48.706 [2024-07-14 09:44:33.062611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.706 [2024-07-14 09:44:33.062637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.706 qpair failed and we were unable to recover it. 00:34:48.706 [2024-07-14 09:44:33.062805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.706 [2024-07-14 09:44:33.062831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.706 qpair failed and we were unable to recover it. 
00:34:48.706 [2024-07-14 09:44:33.063052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.706 [2024-07-14 09:44:33.063078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.706 qpair failed and we were unable to recover it. 00:34:48.706 [2024-07-14 09:44:33.063268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.706 [2024-07-14 09:44:33.063294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.706 qpair failed and we were unable to recover it. 00:34:48.706 [2024-07-14 09:44:33.063487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.706 [2024-07-14 09:44:33.063514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.706 qpair failed and we were unable to recover it. 00:34:48.706 [2024-07-14 09:44:33.063735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.706 [2024-07-14 09:44:33.063761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.706 qpair failed and we were unable to recover it. 00:34:48.706 [2024-07-14 09:44:33.063916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.706 [2024-07-14 09:44:33.063942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.706 qpair failed and we were unable to recover it. 00:34:48.706 [2024-07-14 09:44:33.064158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.706 [2024-07-14 09:44:33.064185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.706 qpair failed and we were unable to recover it. 00:34:48.706 [2024-07-14 09:44:33.064377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.706 [2024-07-14 09:44:33.064402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.706 qpair failed and we were unable to recover it. 00:34:48.706 [2024-07-14 09:44:33.064558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.706 [2024-07-14 09:44:33.064584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.706 qpair failed and we were unable to recover it. 00:34:48.706 [2024-07-14 09:44:33.064746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.706 [2024-07-14 09:44:33.064771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.706 qpair failed and we were unable to recover it. 00:34:48.706 [2024-07-14 09:44:33.064972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.706 [2024-07-14 09:44:33.064998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.706 qpair failed and we were unable to recover it. 
00:34:48.706 [2024-07-14 09:44:33.065163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.706 [2024-07-14 09:44:33.065189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.706 qpair failed and we were unable to recover it. 00:34:48.706 [2024-07-14 09:44:33.065353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.706 [2024-07-14 09:44:33.065379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.706 qpair failed and we were unable to recover it. 00:34:48.706 [2024-07-14 09:44:33.065565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.706 [2024-07-14 09:44:33.065591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.706 qpair failed and we were unable to recover it. 00:34:48.706 [2024-07-14 09:44:33.065787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.706 [2024-07-14 09:44:33.065813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.706 qpair failed and we were unable to recover it. 00:34:48.706 [2024-07-14 09:44:33.066002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.706 [2024-07-14 09:44:33.066028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.706 qpair failed and we were unable to recover it. 00:34:48.706 [2024-07-14 09:44:33.066218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.066244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.066456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.066486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.066644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.066670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.066857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.066889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.067049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.067075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 
00:34:48.707 [2024-07-14 09:44:33.067240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.067266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.067431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.067457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.067650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.067675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.067843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.067875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.068034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.068060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.068245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.068271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.068486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.068512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.068705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.068731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.068894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.068920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.069138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.069163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 
00:34:48.707 [2024-07-14 09:44:33.069385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.069411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.069569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.069595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.069811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.069838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.070040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.070067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.070234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.070265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.070437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.070462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.070680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.070705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.070925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.070952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.071138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.071164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.071322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.071347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 
00:34:48.707 [2024-07-14 09:44:33.071512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.071538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.071727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.071754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.071921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.071947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.072109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.072139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.072328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.072354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.072517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.072542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.072714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.072740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.072928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.072955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.073142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.073168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.073354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.073380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 
00:34:48.707 [2024-07-14 09:44:33.073543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.073568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.073729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.073754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.073944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.073971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.074140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.074167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.074333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.074359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.074555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.074581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.074768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.074793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.074967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.074994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.075170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.075196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.075351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.075377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 
00:34:48.707 [2024-07-14 09:44:33.075528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.075553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.075721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.075746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.075940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.075967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.076140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.076166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.076337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.076363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.076547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.076573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.076724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.076750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.076936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.076963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.077131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.077156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.077322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.077348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 
00:34:48.707 [2024-07-14 09:44:33.077521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.077548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.077717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.077742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.077926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.077953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.078111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.078136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.078304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.078330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.078518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.078544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.078711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.078736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.078897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.078924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.079117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.079143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.079331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.079356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 
00:34:48.707 [2024-07-14 09:44:33.079547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.079574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.079736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.079762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.079957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.079983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.080139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.080166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.080383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.080413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.080599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.080625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.080787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.080813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.080994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.081020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.081213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.081238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.081453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.081478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 
00:34:48.707 [2024-07-14 09:44:33.081654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.081678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.081880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.081907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.082063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.082089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.082250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.082275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.082490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.082516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.082674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.082699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.082885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.082912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.083068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.083094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.083284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.083309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.083507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.083532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 
00:34:48.707 [2024-07-14 09:44:33.083722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.083747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.083933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.083959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.084119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.084145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.084377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.707 [2024-07-14 09:44:33.084402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.707 qpair failed and we were unable to recover it. 00:34:48.707 [2024-07-14 09:44:33.084594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.084619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.084785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.084811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.085004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.085031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.085213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.085239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.085455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.085480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.085639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.085664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 
00:34:48.708 [2024-07-14 09:44:33.085880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.085907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.086098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.086123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.086294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.086320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.086508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.086534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.086747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.086772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.086968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.086994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.087180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.087206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.087466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.087492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.087681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.087707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.087873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.087899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 
00:34:48.708 [2024-07-14 09:44:33.088089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.088115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.088272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.088298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.088499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.088525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.088717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.088743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.088939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.088964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.089174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.089218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.089416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.089444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.089645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.089672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.089870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.089898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.090096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.090134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 
00:34:48.708 [2024-07-14 09:44:33.090331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.090357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.090548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.090576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.090768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.090795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.091014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.091042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.091232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.091260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.091489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.091516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.091710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.091737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.091933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.091960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.092151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.092183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.092413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.092440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 
00:34:48.708 [2024-07-14 09:44:33.092597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.092624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.092787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.092814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.093011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.093038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.093237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.093264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.093465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.093493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.093728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.093755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.093948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.093975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.094206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.094233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.094396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.094422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.094607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.094633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 
00:34:48.708 [2024-07-14 09:44:33.094797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.094824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.095006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.095034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.095268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.095295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.095493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.095520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.095699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.095726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.095899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.095936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.096125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.096157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.096374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.096400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.096586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.096612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.096816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.096844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 
00:34:48.708 [2024-07-14 09:44:33.097041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.097068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.097275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.097303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.097462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.097488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.097714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.097741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.097935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.097962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.098132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.098159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.098327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.098354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.098554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.098581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.098771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.098798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.098993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.099020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 
00:34:48.708 [2024-07-14 09:44:33.099253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.099279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.099471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.099497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.099658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.099686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.099878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.099905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.100071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.100097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.100299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.100326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.100516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.100543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.100762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.100789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.100983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.101010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.101238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.101265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 
00:34:48.708 [2024-07-14 09:44:33.101456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.101482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.101703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.101729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.101913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.101948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.102154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.102181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.708 qpair failed and we were unable to recover it. 00:34:48.708 [2024-07-14 09:44:33.102397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.708 [2024-07-14 09:44:33.102423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.102610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.102637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.102791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.102818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.103000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.103027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.103221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.103247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.103445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.103472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 
00:34:48.709 [2024-07-14 09:44:33.103667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.103693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.103864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.103902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.104138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.104166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.104355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.104382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.104565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.104592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.104790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.104816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.105016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.105043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.105244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.105270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.105469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.105495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.105661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.105689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 
00:34:48.709 [2024-07-14 09:44:33.105855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.105889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.106118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.106154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.106355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.106381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.106580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.106608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.106796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.106823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.107030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.107062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.107266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.107292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.107487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.107514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.107699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.107725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.107881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.107908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 
00:34:48.709 [2024-07-14 09:44:33.108069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.108095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.108296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.108323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.108512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.108538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.108702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.108729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.108923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.108950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.109150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.109177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.109374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.109400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.109564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.109592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.109817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.109844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.110098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.110132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 
00:34:48.709 [2024-07-14 09:44:33.110323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.110350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.110535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.110561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.110721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.110749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.110925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.110953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.111142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.111168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.111356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.111382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.111567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.111593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.111786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.111813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.112031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.112060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.112237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.112264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 
00:34:48.709 [2024-07-14 09:44:33.112432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.112459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.112661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.112687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.112857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.112889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.113119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.113146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.113339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.113366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.113530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.113557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.113747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.113773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.113936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.113963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.114178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.114205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.114402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.114429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 
00:34:48.709 [2024-07-14 09:44:33.114617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.114644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.114836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.114863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.115034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.115060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.115269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.115297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.115495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.115522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.115722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.115753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.115924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.115952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.116153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.116181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.116371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.116398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.116613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.116639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 
00:34:48.709 [2024-07-14 09:44:33.116853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.116894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.117095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.117122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.117342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.117369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.117586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.117612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.117824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.117850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.118075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.118102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.118290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.118317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.118507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.118534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.118726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.118752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.118952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.118979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 
00:34:48.709 [2024-07-14 09:44:33.119193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.119219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.119379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.119406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.119563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.119589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.119819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.119845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.120042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.120069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.120264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.120291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.120506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.120532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.120695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.120722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.120938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.120966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 00:34:48.709 [2024-07-14 09:44:33.121134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.121160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.709 qpair failed and we were unable to recover it. 
00:34:48.709 [2024-07-14 09:44:33.121351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.709 [2024-07-14 09:44:33.121379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.710 qpair failed and we were unable to recover it. 00:34:48.710 [2024-07-14 09:44:33.121575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.710 [2024-07-14 09:44:33.121602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.710 qpair failed and we were unable to recover it. 00:34:48.710 [2024-07-14 09:44:33.121768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.710 [2024-07-14 09:44:33.121809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.710 qpair failed and we were unable to recover it. 00:34:48.710 [2024-07-14 09:44:33.122041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.710 [2024-07-14 09:44:33.122069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.710 qpair failed and we were unable to recover it. 00:34:48.710 [2024-07-14 09:44:33.122268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.710 [2024-07-14 09:44:33.122294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.710 qpair failed and we were unable to recover it. 00:34:48.710 [2024-07-14 09:44:33.122517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.710 [2024-07-14 09:44:33.122545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.710 qpair failed and we were unable to recover it. 00:34:48.710 [2024-07-14 09:44:33.122765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.710 [2024-07-14 09:44:33.122792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.710 qpair failed and we were unable to recover it. 00:34:48.710 [2024-07-14 09:44:33.122950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.710 [2024-07-14 09:44:33.122978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.710 qpair failed and we were unable to recover it. 00:34:48.710 [2024-07-14 09:44:33.123172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.710 [2024-07-14 09:44:33.123198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.710 qpair failed and we were unable to recover it. 00:34:48.710 [2024-07-14 09:44:33.123386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.710 [2024-07-14 09:44:33.123414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.710 qpair failed and we were unable to recover it. 
00:34:48.710 [2024-07-14 09:44:33.123756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.710 [2024-07-14 09:44:33.123783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.710 qpair failed and we were unable to recover it. 00:34:48.710 [2024-07-14 09:44:33.123956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.710 [2024-07-14 09:44:33.123984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.710 qpair failed and we were unable to recover it. 00:34:48.710 [2024-07-14 09:44:33.124153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.710 [2024-07-14 09:44:33.124180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.710 qpair failed and we were unable to recover it. 00:34:48.710 [2024-07-14 09:44:33.124373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.710 [2024-07-14 09:44:33.124399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.710 qpair failed and we were unable to recover it. 00:34:48.710 [2024-07-14 09:44:33.124565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.710 [2024-07-14 09:44:33.124591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.710 qpair failed and we were unable to recover it. 00:34:48.710 [2024-07-14 09:44:33.124755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.710 [2024-07-14 09:44:33.124808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.710 qpair failed and we were unable to recover it. 00:34:48.710 [2024-07-14 09:44:33.124981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.710 [2024-07-14 09:44:33.125009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.710 qpair failed and we were unable to recover it. 00:34:48.710 [2024-07-14 09:44:33.125184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.710 [2024-07-14 09:44:33.125211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.710 qpair failed and we were unable to recover it. 00:34:48.710 [2024-07-14 09:44:33.125394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.710 [2024-07-14 09:44:33.125421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.710 qpair failed and we were unable to recover it. 00:34:48.710 [2024-07-14 09:44:33.125635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.710 [2024-07-14 09:44:33.125662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.710 qpair failed and we were unable to recover it. 
00:34:48.710 [2024-07-14 09:44:33.125816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.710 [2024-07-14 09:44:33.125843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.710 qpair failed and we were unable to recover it. 00:34:48.710 [2024-07-14 09:44:33.125897] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10045b0 (9): Bad file descriptor 00:34:48.710 [2024-07-14 09:44:33.126134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.710 [2024-07-14 09:44:33.126183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:48.710 qpair failed and we were unable to recover it. 00:34:48.710 [2024-07-14 09:44:33.126389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.710 [2024-07-14 09:44:33.126425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:48.710 qpair failed and we were unable to recover it. 00:34:48.710 [2024-07-14 09:44:33.126636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.710 [2024-07-14 09:44:33.126669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:48.710 qpair failed and we were unable to recover it. 00:34:48.710 [2024-07-14 09:44:33.126857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.710 [2024-07-14 09:44:33.126898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:48.710 qpair failed and we were unable to recover it. 00:34:48.710 [2024-07-14 09:44:33.127079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.710 [2024-07-14 09:44:33.127111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:48.710 qpair failed and we were unable to recover it. 00:34:48.710 [2024-07-14 09:44:33.127330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.710 [2024-07-14 09:44:33.127365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:48.710 qpair failed and we were unable to recover it. 00:34:48.710 [2024-07-14 09:44:33.127582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.710 [2024-07-14 09:44:33.127618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:48.710 qpair failed and we were unable to recover it. 00:34:48.710 [2024-07-14 09:44:33.127839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.710 [2024-07-14 09:44:33.127880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:48.710 qpair failed and we were unable to recover it. 
00:34:48.710 [2024-07-14 09:44:33.128093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.710 [2024-07-14 09:44:33.128126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:48.710 qpair failed and we were unable to recover it. 00:34:48.980 [2024-07-14 09:44:33.128309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.980 [2024-07-14 09:44:33.128344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:48.980 qpair failed and we were unable to recover it. 00:34:48.980 [2024-07-14 09:44:33.128559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.980 [2024-07-14 09:44:33.128592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:48.980 qpair failed and we were unable to recover it. 00:34:48.980 [2024-07-14 09:44:33.128810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.980 [2024-07-14 09:44:33.128845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:48.980 qpair failed and we were unable to recover it. 00:34:48.980 [2024-07-14 09:44:33.129074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.980 [2024-07-14 09:44:33.129114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.980 qpair failed and we were unable to recover it. 00:34:48.980 [2024-07-14 09:44:33.129285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.980 [2024-07-14 09:44:33.129313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.980 qpair failed and we were unable to recover it. 00:34:48.980 [2024-07-14 09:44:33.129511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.980 [2024-07-14 09:44:33.129537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.980 qpair failed and we were unable to recover it. 00:34:48.980 [2024-07-14 09:44:33.129709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.980 [2024-07-14 09:44:33.129738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.980 qpair failed and we were unable to recover it. 00:34:48.980 [2024-07-14 09:44:33.129929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.980 [2024-07-14 09:44:33.129957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.980 qpair failed and we were unable to recover it. 00:34:48.980 [2024-07-14 09:44:33.130114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.980 [2024-07-14 09:44:33.130140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.980 qpair failed and we were unable to recover it. 
00:34:48.980 [2024-07-14 09:44:33.130308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.980 [2024-07-14 09:44:33.130334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.980 qpair failed and we were unable to recover it. 00:34:48.980 [2024-07-14 09:44:33.130524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.980 [2024-07-14 09:44:33.130550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.980 qpair failed and we were unable to recover it. 00:34:48.980 [2024-07-14 09:44:33.130717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.980 [2024-07-14 09:44:33.130749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.980 qpair failed and we were unable to recover it. 00:34:48.980 [2024-07-14 09:44:33.130943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.980 [2024-07-14 09:44:33.130970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.980 qpair failed and we were unable to recover it. 00:34:48.980 [2024-07-14 09:44:33.131139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.980 [2024-07-14 09:44:33.131166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.980 qpair failed and we were unable to recover it. 00:34:48.980 [2024-07-14 09:44:33.131351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.980 [2024-07-14 09:44:33.131377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.980 qpair failed and we were unable to recover it. 00:34:48.980 [2024-07-14 09:44:33.131543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.980 [2024-07-14 09:44:33.131571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.980 qpair failed and we were unable to recover it. 00:34:48.980 [2024-07-14 09:44:33.131784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.980 [2024-07-14 09:44:33.131811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.980 qpair failed and we were unable to recover it. 00:34:48.980 [2024-07-14 09:44:33.131979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.980 [2024-07-14 09:44:33.132007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.980 qpair failed and we were unable to recover it. 00:34:48.980 [2024-07-14 09:44:33.132194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.980 [2024-07-14 09:44:33.132221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.980 qpair failed and we were unable to recover it. 
00:34:48.980 [2024-07-14 09:44:33.132413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.980 [2024-07-14 09:44:33.132439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.980 qpair failed and we were unable to recover it. 00:34:48.980 [2024-07-14 09:44:33.132610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.132636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 00:34:48.981 [2024-07-14 09:44:33.132825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.132851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 00:34:48.981 [2024-07-14 09:44:33.133077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.133104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 00:34:48.981 [2024-07-14 09:44:33.133330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.133356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 00:34:48.981 [2024-07-14 09:44:33.133571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.133600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 00:34:48.981 [2024-07-14 09:44:33.133828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.133855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 00:34:48.981 [2024-07-14 09:44:33.134050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.134077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 00:34:48.981 [2024-07-14 09:44:33.134302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.134329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 00:34:48.981 [2024-07-14 09:44:33.134513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.134539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 
00:34:48.981 [2024-07-14 09:44:33.134696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.134723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 00:34:48.981 [2024-07-14 09:44:33.134927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.134955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 00:34:48.981 [2024-07-14 09:44:33.135160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.135187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 00:34:48.981 [2024-07-14 09:44:33.135379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.135405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 00:34:48.981 [2024-07-14 09:44:33.135572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.135598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 00:34:48.981 [2024-07-14 09:44:33.135769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.135796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 00:34:48.981 [2024-07-14 09:44:33.135983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.136018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 00:34:48.981 [2024-07-14 09:44:33.136218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.136245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 00:34:48.981 [2024-07-14 09:44:33.136433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.136460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 00:34:48.981 [2024-07-14 09:44:33.136631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.136658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 
00:34:48.981 [2024-07-14 09:44:33.136840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.136872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 00:34:48.981 [2024-07-14 09:44:33.137104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.137131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 00:34:48.981 [2024-07-14 09:44:33.137350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.137376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 00:34:48.981 [2024-07-14 09:44:33.137545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.137572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 00:34:48.981 [2024-07-14 09:44:33.137771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.137798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 00:34:48.981 [2024-07-14 09:44:33.137962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.137990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 00:34:48.981 [2024-07-14 09:44:33.138173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.138199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 00:34:48.981 [2024-07-14 09:44:33.138405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.138431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 00:34:48.981 [2024-07-14 09:44:33.138599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.138626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 00:34:48.981 [2024-07-14 09:44:33.138817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.138844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 
00:34:48.981 [2024-07-14 09:44:33.139048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.139075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 00:34:48.981 [2024-07-14 09:44:33.139264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.139290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 00:34:48.981 [2024-07-14 09:44:33.139458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.139489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 00:34:48.981 [2024-07-14 09:44:33.139682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.139708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 00:34:48.981 [2024-07-14 09:44:33.139879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.139910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 00:34:48.981 [2024-07-14 09:44:33.140132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.140159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 00:34:48.981 [2024-07-14 09:44:33.140319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.140346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 00:34:48.981 [2024-07-14 09:44:33.140516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.140544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 00:34:48.981 [2024-07-14 09:44:33.140750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.140776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 00:34:48.981 [2024-07-14 09:44:33.140984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.141011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 
00:34:48.981 [2024-07-14 09:44:33.141208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.141234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 00:34:48.981 [2024-07-14 09:44:33.141450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.141476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 00:34:48.981 [2024-07-14 09:44:33.141666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.141693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 00:34:48.981 [2024-07-14 09:44:33.141885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.141913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 00:34:48.981 [2024-07-14 09:44:33.142141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.142166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 00:34:48.981 [2024-07-14 09:44:33.142357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.142383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 00:34:48.981 [2024-07-14 09:44:33.142576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.142603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 00:34:48.981 [2024-07-14 09:44:33.142780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.142805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 00:34:48.981 [2024-07-14 09:44:33.143004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.143031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 00:34:48.981 [2024-07-14 09:44:33.143226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.143252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 
00:34:48.981 [2024-07-14 09:44:33.143444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.143470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 00:34:48.981 [2024-07-14 09:44:33.143696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.143722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 00:34:48.981 [2024-07-14 09:44:33.143894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.143922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 00:34:48.981 [2024-07-14 09:44:33.144088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.144115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 00:34:48.981 [2024-07-14 09:44:33.144350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.144377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 00:34:48.981 [2024-07-14 09:44:33.144570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.144596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 00:34:48.981 [2024-07-14 09:44:33.144812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.144838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 00:34:48.981 [2024-07-14 09:44:33.145078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.145105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 00:34:48.981 [2024-07-14 09:44:33.145317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.145343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 00:34:48.981 [2024-07-14 09:44:33.145511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.145538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 
00:34:48.981 [2024-07-14 09:44:33.145697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.145723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 00:34:48.981 [2024-07-14 09:44:33.145887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.145914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 00:34:48.981 [2024-07-14 09:44:33.146119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.146145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 00:34:48.981 [2024-07-14 09:44:33.146336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.146362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 00:34:48.981 [2024-07-14 09:44:33.146554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.146580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 00:34:48.981 [2024-07-14 09:44:33.146768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.146795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 00:34:48.981 [2024-07-14 09:44:33.146956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.146983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 00:34:48.981 [2024-07-14 09:44:33.147182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.147208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 00:34:48.981 [2024-07-14 09:44:33.147404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.981 [2024-07-14 09:44:33.147430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.981 qpair failed and we were unable to recover it. 00:34:48.981 [2024-07-14 09:44:33.147625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.147651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 
00:34:48.982 [2024-07-14 09:44:33.147844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.147878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 00:34:48.982 [2024-07-14 09:44:33.148056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.148082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 00:34:48.982 [2024-07-14 09:44:33.148267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.148293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 00:34:48.982 [2024-07-14 09:44:33.148470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.148497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 00:34:48.982 [2024-07-14 09:44:33.148690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.148717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 00:34:48.982 [2024-07-14 09:44:33.148908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.148935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 00:34:48.982 [2024-07-14 09:44:33.149128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.149155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 00:34:48.982 [2024-07-14 09:44:33.149345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.149371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 00:34:48.982 [2024-07-14 09:44:33.149629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.149656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 00:34:48.982 [2024-07-14 09:44:33.149878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.149906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 
00:34:48.982 [2024-07-14 09:44:33.150077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.150103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 00:34:48.982 [2024-07-14 09:44:33.150320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.150346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 00:34:48.982 [2024-07-14 09:44:33.150562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.150588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 00:34:48.982 [2024-07-14 09:44:33.150750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.150777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 00:34:48.982 [2024-07-14 09:44:33.150973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.151001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 00:34:48.982 [2024-07-14 09:44:33.151170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.151196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 00:34:48.982 [2024-07-14 09:44:33.151475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.151515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 00:34:48.982 [2024-07-14 09:44:33.151689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.151715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 00:34:48.982 [2024-07-14 09:44:33.151901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.151930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 00:34:48.982 [2024-07-14 09:44:33.152128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.152154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 
00:34:48.982 [2024-07-14 09:44:33.152349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.152377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 00:34:48.982 [2024-07-14 09:44:33.152565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.152592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 00:34:48.982 [2024-07-14 09:44:33.152781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.152807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 00:34:48.982 [2024-07-14 09:44:33.153025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.153052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 00:34:48.982 [2024-07-14 09:44:33.153246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.153272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 00:34:48.982 [2024-07-14 09:44:33.153446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.153472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 00:34:48.982 [2024-07-14 09:44:33.153637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.153664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 00:34:48.982 [2024-07-14 09:44:33.153850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.153882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 00:34:48.982 [2024-07-14 09:44:33.154050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.154077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 00:34:48.982 [2024-07-14 09:44:33.154264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.154295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 
00:34:48.982 [2024-07-14 09:44:33.154491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.154517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 00:34:48.982 [2024-07-14 09:44:33.154686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.154712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 00:34:48.982 [2024-07-14 09:44:33.154877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.154904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 00:34:48.982 [2024-07-14 09:44:33.155069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.155095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 00:34:48.982 [2024-07-14 09:44:33.155342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.155368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 00:34:48.982 [2024-07-14 09:44:33.155539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.155581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 00:34:48.982 [2024-07-14 09:44:33.155815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.155841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 00:34:48.982 [2024-07-14 09:44:33.156053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.156081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 00:34:48.982 [2024-07-14 09:44:33.156249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.156275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 00:34:48.982 [2024-07-14 09:44:33.156437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.156465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 
00:34:48.982 [2024-07-14 09:44:33.156670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.156696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 00:34:48.982 [2024-07-14 09:44:33.156942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.156970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 00:34:48.982 [2024-07-14 09:44:33.157139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.157166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 00:34:48.982 [2024-07-14 09:44:33.157365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.157392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 00:34:48.982 [2024-07-14 09:44:33.157556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.157582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 00:34:48.982 [2024-07-14 09:44:33.157742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.157768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 00:34:48.982 [2024-07-14 09:44:33.157960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.157997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 00:34:48.982 [2024-07-14 09:44:33.158188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.158216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 00:34:48.982 [2024-07-14 09:44:33.158443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.158469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 00:34:48.982 [2024-07-14 09:44:33.158634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.158661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 
00:34:48.982 [2024-07-14 09:44:33.158828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.158854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 00:34:48.982 [2024-07-14 09:44:33.159060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.159086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 00:34:48.982 [2024-07-14 09:44:33.159266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.159292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 00:34:48.982 [2024-07-14 09:44:33.159445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.159471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 00:34:48.982 [2024-07-14 09:44:33.159634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.159661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 00:34:48.982 [2024-07-14 09:44:33.159891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.159925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 00:34:48.982 [2024-07-14 09:44:33.160093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.160120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 00:34:48.982 [2024-07-14 09:44:33.160310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.160336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 00:34:48.982 [2024-07-14 09:44:33.160505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.160532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 00:34:48.982 [2024-07-14 09:44:33.160724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.160750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 
00:34:48.982 [2024-07-14 09:44:33.160947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.160974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 00:34:48.982 [2024-07-14 09:44:33.161166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.161193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 00:34:48.982 [2024-07-14 09:44:33.161379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.161405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 00:34:48.982 [2024-07-14 09:44:33.161591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.161618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 00:34:48.982 [2024-07-14 09:44:33.161811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.161837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 00:34:48.982 [2024-07-14 09:44:33.162033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.162059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 00:34:48.982 [2024-07-14 09:44:33.162250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.162278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 00:34:48.982 [2024-07-14 09:44:33.162437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.982 [2024-07-14 09:44:33.162463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.982 qpair failed and we were unable to recover it. 00:34:48.983 [2024-07-14 09:44:33.162626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.983 [2024-07-14 09:44:33.162653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.983 qpair failed and we were unable to recover it. 00:34:48.983 [2024-07-14 09:44:33.162844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.983 [2024-07-14 09:44:33.162882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.983 qpair failed and we were unable to recover it. 
00:34:48.985 [2024-07-14 09:44:33.208413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.208440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.208669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.208694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.208887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.208920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.209148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.209175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.209351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.209376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.209605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.209631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.209822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.209849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.210054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.210081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.210275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.210301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.210492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.210519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 
00:34:48.986 [2024-07-14 09:44:33.210705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.210731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.210924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.210951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.211118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.211144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.211372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.211398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.211597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.211624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.211786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.211813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.212014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.212042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.212206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.212233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.212430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.212458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.212671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.212697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 
00:34:48.986 [2024-07-14 09:44:33.212894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.212921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.213122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.213149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.213335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.213361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.213539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.213566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.213752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.213779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.213970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.213996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.214188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.214215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.214406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.214433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.214617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.214644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.214800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.214827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 
00:34:48.986 [2024-07-14 09:44:33.215021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.215048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.215233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.215260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.215448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.215475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.215670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.215696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.215885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.215914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.216148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.216174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.216374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.216400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.216594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.216622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.216840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.216880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.217078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.217109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 
00:34:48.986 [2024-07-14 09:44:33.217300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.217327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.217510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.217536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.217753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.217780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.217940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.217967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.218185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.218210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.218422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.218448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.218617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.218644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.218854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.218887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.219082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.219108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.219268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.219295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 
00:34:48.986 [2024-07-14 09:44:33.219486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.219512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.219706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.219732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.219947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.219975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.220195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.220221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.220440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.220467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.220685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.220711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.220876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.220903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.221096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.221123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.221378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.221403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.221589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.221616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 
00:34:48.986 [2024-07-14 09:44:33.221837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.221863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.222086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.222112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.222336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.222362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.222592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.222632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.222842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.222887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.223052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.223079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.223301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.223328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.223492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.223520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.223736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.223762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.223959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.223987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 
00:34:48.986 [2024-07-14 09:44:33.224155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.986 [2024-07-14 09:44:33.224196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.986 qpair failed and we were unable to recover it. 00:34:48.986 [2024-07-14 09:44:33.224422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.224448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.224663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.224690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.224900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.224928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.225097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.225123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.225299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.225325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.225526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.225553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.225768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.225795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.225959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.225986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.226155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.226186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 
00:34:48.987 [2024-07-14 09:44:33.226353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.226380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.226570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.226597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.226769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.226795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.226989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.227016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.227219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.227245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.227465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.227491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.227698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.227726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.227917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.227945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.228140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.228166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.228409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.228435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 
00:34:48.987 [2024-07-14 09:44:33.228617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.228642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.228839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.228871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.229057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.229084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.229281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.229308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.229511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.229537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.229690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.229715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.229912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.229953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.230176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.230202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.230424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.230465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.230652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.230678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 
00:34:48.987 [2024-07-14 09:44:33.230891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.230918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.231116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.231142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.231330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.231358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.231540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.231567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.231757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.231784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.232018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.232046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.232252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.232278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.232475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.232501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.232718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.232744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.232936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.232964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 
00:34:48.987 [2024-07-14 09:44:33.233125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.233153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.233373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.233399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.233586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.233611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.233805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.233831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.234033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.234059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.234261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.234286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.234480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.234507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.234724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.234749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.234947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.234973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.235206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.235236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 
00:34:48.987 [2024-07-14 09:44:33.235393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.235419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.235643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.235669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.235889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.235916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.236105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.236132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.236354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.236396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.236600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.236627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.236793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.236833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.237015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.237043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.237237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.237263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.237478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.237504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 
00:34:48.987 [2024-07-14 09:44:33.237716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.237742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.237957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.237998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.238198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.238225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.238393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.238420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.238610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.238636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.238854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.238901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.239098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.239125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.239346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.239373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.239564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.239591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.239840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.239877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 
00:34:48.987 [2024-07-14 09:44:33.240077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.240104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.240416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.240442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.240673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.240698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.240910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.987 [2024-07-14 09:44:33.240938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.987 qpair failed and we were unable to recover it. 00:34:48.987 [2024-07-14 09:44:33.241127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.241154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 00:34:48.988 [2024-07-14 09:44:33.241355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.241381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 00:34:48.988 [2024-07-14 09:44:33.241556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.241583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 00:34:48.988 [2024-07-14 09:44:33.241751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.241778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 00:34:48.988 [2024-07-14 09:44:33.241967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.241995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 00:34:48.988 [2024-07-14 09:44:33.242166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.242192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 
00:34:48.988 [2024-07-14 09:44:33.242390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.242417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 00:34:48.988 [2024-07-14 09:44:33.242666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.242692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 00:34:48.988 [2024-07-14 09:44:33.242854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.242887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 00:34:48.988 [2024-07-14 09:44:33.243076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.243102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 00:34:48.988 [2024-07-14 09:44:33.243277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.243303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 00:34:48.988 [2024-07-14 09:44:33.243496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.243521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 00:34:48.988 [2024-07-14 09:44:33.243739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.243765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 00:34:48.988 [2024-07-14 09:44:33.243988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.244016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 00:34:48.988 [2024-07-14 09:44:33.244237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.244264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 00:34:48.988 [2024-07-14 09:44:33.244456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.244487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 
00:34:48.988 [2024-07-14 09:44:33.244691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.244717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 00:34:48.988 [2024-07-14 09:44:33.244912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.244940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 00:34:48.988 [2024-07-14 09:44:33.245133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.245160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 00:34:48.988 [2024-07-14 09:44:33.245363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.245388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 00:34:48.988 [2024-07-14 09:44:33.245551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.245576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 00:34:48.988 [2024-07-14 09:44:33.245792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.245819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 00:34:48.988 [2024-07-14 09:44:33.245992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.246018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 00:34:48.988 [2024-07-14 09:44:33.246207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.246233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 00:34:48.988 [2024-07-14 09:44:33.246428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.246455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 00:34:48.988 [2024-07-14 09:44:33.246674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.246700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 
00:34:48.988 [2024-07-14 09:44:33.246863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.246895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 00:34:48.988 [2024-07-14 09:44:33.247086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.247112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 00:34:48.988 [2024-07-14 09:44:33.247312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.247337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 00:34:48.988 [2024-07-14 09:44:33.247556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.247583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 00:34:48.988 [2024-07-14 09:44:33.247743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.247769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 00:34:48.988 [2024-07-14 09:44:33.247936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.247964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 00:34:48.988 [2024-07-14 09:44:33.248157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.248184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 00:34:48.988 [2024-07-14 09:44:33.248352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.248377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 00:34:48.988 [2024-07-14 09:44:33.248616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.248641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 00:34:48.988 [2024-07-14 09:44:33.248850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.248895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 
00:34:48.988 [2024-07-14 09:44:33.249063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.249091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 00:34:48.988 [2024-07-14 09:44:33.249313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.249339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 00:34:48.988 [2024-07-14 09:44:33.249534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.249560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 00:34:48.988 [2024-07-14 09:44:33.249723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.249750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 00:34:48.988 [2024-07-14 09:44:33.249940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.249967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 00:34:48.988 [2024-07-14 09:44:33.250160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.250186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 00:34:48.988 [2024-07-14 09:44:33.250383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.250409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 00:34:48.988 [2024-07-14 09:44:33.250604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.250630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 00:34:48.988 [2024-07-14 09:44:33.250798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.250825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 00:34:48.988 [2024-07-14 09:44:33.251099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.251127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 
00:34:48.988 [2024-07-14 09:44:33.251354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.251380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 00:34:48.988 [2024-07-14 09:44:33.251572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.251599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 00:34:48.988 [2024-07-14 09:44:33.251793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.251819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 00:34:48.988 [2024-07-14 09:44:33.251991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.252019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 00:34:48.988 [2024-07-14 09:44:33.252215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.252242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 00:34:48.988 [2024-07-14 09:44:33.252437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.252463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 00:34:48.988 [2024-07-14 09:44:33.252658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.252684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 00:34:48.988 [2024-07-14 09:44:33.252842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.252875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 00:34:48.988 [2024-07-14 09:44:33.253061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.253087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 00:34:48.988 [2024-07-14 09:44:33.253362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.253408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 
00:34:48.988 [2024-07-14 09:44:33.253619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.253660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 00:34:48.988 [2024-07-14 09:44:33.253898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.253924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 00:34:48.988 [2024-07-14 09:44:33.254138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.254179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 00:34:48.988 [2024-07-14 09:44:33.254405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.254431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 00:34:48.988 [2024-07-14 09:44:33.254626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.254653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 00:34:48.988 [2024-07-14 09:44:33.254853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.254899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 00:34:48.988 [2024-07-14 09:44:33.255120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.255147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 00:34:48.988 [2024-07-14 09:44:33.255338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.255364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 00:34:48.988 [2024-07-14 09:44:33.255569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.255594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 00:34:48.988 [2024-07-14 09:44:33.255860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.255897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 
00:34:48.988 [2024-07-14 09:44:33.256093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.256120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 00:34:48.988 [2024-07-14 09:44:33.256316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.988 [2024-07-14 09:44:33.256342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.988 qpair failed and we were unable to recover it. 00:34:48.988 [2024-07-14 09:44:33.256536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.256563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.256817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.256844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.257066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.257110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.257310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.257345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.257561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.257596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.257831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.257875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.258101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.258134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.258342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.258376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 
00:34:48.989 [2024-07-14 09:44:33.258648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.258697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.258933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.258968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.259180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.259212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.259405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.259436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.259680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.259715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.259935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.259969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.260211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.260251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.260478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.260505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.260718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.260746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.260940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.260967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 
00:34:48.989 [2024-07-14 09:44:33.261167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.261195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.261385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.261412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.261576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.261602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.261797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.261823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.261990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.262018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.262182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.262209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.262428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.262454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.262649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.262676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.262876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.262904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.263094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.263126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 
00:34:48.989 [2024-07-14 09:44:33.263353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.263379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.263576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.263602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.263820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.263846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.264076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.264103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.264296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.264322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.264516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.264543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.264733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.264759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.264952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.264980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.265197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.265223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.265424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.265450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 
00:34:48.989 [2024-07-14 09:44:33.265640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.265666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.265854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.265888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.266059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.266086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.266282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.266309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.266528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.266554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.266748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.266774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.266984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.267010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.267198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.267224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.267410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.267436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.267629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.267654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 
00:34:48.989 [2024-07-14 09:44:33.267876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.267917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.268084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.268111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.268303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.268330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.268547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.268573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.268784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.268810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.269028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.269056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.269251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.269283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.269479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.269505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.269718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.269744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.269957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.269985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 
00:34:48.989 [2024-07-14 09:44:33.270142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.270169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.270386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.270412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.270572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.270600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.270787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.270813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.271004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.271031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.271219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.271245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.271458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.271484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.271655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.271681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.271878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.271907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.272097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.272123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 
00:34:48.989 [2024-07-14 09:44:33.272299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.272325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.272523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.272550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.272740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.272766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.989 qpair failed and we were unable to recover it. 00:34:48.989 [2024-07-14 09:44:33.272977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.989 [2024-07-14 09:44:33.273005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.990 qpair failed and we were unable to recover it. 00:34:48.990 [2024-07-14 09:44:33.273192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.990 [2024-07-14 09:44:33.273219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.990 qpair failed and we were unable to recover it. 00:34:48.990 [2024-07-14 09:44:33.273409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.990 [2024-07-14 09:44:33.273435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.990 qpair failed and we were unable to recover it. 00:34:48.990 [2024-07-14 09:44:33.273601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.990 [2024-07-14 09:44:33.273627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.990 qpair failed and we were unable to recover it. 00:34:48.990 [2024-07-14 09:44:33.273840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.990 [2024-07-14 09:44:33.273873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.990 qpair failed and we were unable to recover it. 00:34:48.990 [2024-07-14 09:44:33.274070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.990 [2024-07-14 09:44:33.274097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.990 qpair failed and we were unable to recover it. 00:34:48.990 [2024-07-14 09:44:33.274288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.990 [2024-07-14 09:44:33.274314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.990 qpair failed and we were unable to recover it. 
00:34:48.990 [2024-07-14 09:44:33.274511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.990 [2024-07-14 09:44:33.274537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.990 qpair failed and we were unable to recover it. 00:34:48.990 [2024-07-14 09:44:33.274733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.990 [2024-07-14 09:44:33.274759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.990 qpair failed and we were unable to recover it. 00:34:48.990 [2024-07-14 09:44:33.274978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.990 [2024-07-14 09:44:33.275005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.990 qpair failed and we were unable to recover it. 00:34:48.990 [2024-07-14 09:44:33.275225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.990 [2024-07-14 09:44:33.275251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.990 qpair failed and we were unable to recover it. 00:34:48.990 [2024-07-14 09:44:33.275438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.990 [2024-07-14 09:44:33.275464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.990 qpair failed and we were unable to recover it. 00:34:48.990 [2024-07-14 09:44:33.275686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.990 [2024-07-14 09:44:33.275712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.990 qpair failed and we were unable to recover it. 00:34:48.990 [2024-07-14 09:44:33.275909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.990 [2024-07-14 09:44:33.275936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.990 qpair failed and we were unable to recover it. 00:34:48.990 [2024-07-14 09:44:33.276105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.990 [2024-07-14 09:44:33.276131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.990 qpair failed and we were unable to recover it. 00:34:48.990 [2024-07-14 09:44:33.276325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.990 [2024-07-14 09:44:33.276353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.990 qpair failed and we were unable to recover it. 00:34:48.990 [2024-07-14 09:44:33.276545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.990 [2024-07-14 09:44:33.276572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.990 qpair failed and we were unable to recover it. 
00:34:48.990 [2024-07-14 09:44:33.276758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:48.990 [2024-07-14 09:44:33.276784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420
00:34:48.990 qpair failed and we were unable to recover it.
00:34:48.993 (the same three-line failure sequence -- connect() failed, errno = 111; sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." -- repeats for every reconnect attempt timestamped from 09:44:33.276972 through 09:44:33.324253)
00:34:48.993 [2024-07-14 09:44:33.324440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.324466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.324669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.324695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.324892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.324919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.325111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.325137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.325289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.325315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.325506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.325532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.325688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.325714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.325931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.325959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.326173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.326199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.326386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.326412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 
00:34:48.993 [2024-07-14 09:44:33.326576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.326602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.326792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.326819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.327021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.327049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.327272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.327298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.327513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.327540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.327735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.327763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.327988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.328015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.328233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.328259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.328431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.328457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.328621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.328647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 
00:34:48.993 [2024-07-14 09:44:33.328839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.328872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.329070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.329096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.329282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.329308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.329477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.329504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.329720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.329746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.329907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.329933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.330149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.330176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.330361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.330387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.330558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.330584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.330800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.330826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 
00:34:48.993 [2024-07-14 09:44:33.330997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.331025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.331239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.331265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.331431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.331457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.331644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.331670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.331858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.331900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.332065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.332092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.332284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.332315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.332483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.332509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.332731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.332757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.332992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.333019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 
00:34:48.993 [2024-07-14 09:44:33.333239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.333265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.333427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.333454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.333679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.333705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.333902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.333929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.334116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.334142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.334360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.334387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.334599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.334625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.334784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.334809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.334999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.335026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.335190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.335216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 
00:34:48.993 [2024-07-14 09:44:33.335411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.335439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.335629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.335655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.335846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.335878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.336100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.336127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.336294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.336319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.336520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.336546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.336760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.336786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.336979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.337005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.337190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.337217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.337407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.337433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 
00:34:48.993 [2024-07-14 09:44:33.337646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.337672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.337837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.337872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.338075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.338102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.338297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.338323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.338526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.338554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.338749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.338775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.338973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.339005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.339200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.339227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.339417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.339444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.339636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.339662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 
00:34:48.993 [2024-07-14 09:44:33.339852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.339897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.340114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.340141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.340304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.340331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.340522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.340549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.340731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.340757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.340943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.993 [2024-07-14 09:44:33.340970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.993 qpair failed and we were unable to recover it. 00:34:48.993 [2024-07-14 09:44:33.341163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.341194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.341386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.341413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.341606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.341633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.341838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.341863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 
00:34:48.994 [2024-07-14 09:44:33.342110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.342137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.342407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.342433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.342674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.342701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.342920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.342948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.343114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.343142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.343358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.343385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.343572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.343598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.343767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.343793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.344050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.344077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.344257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.344283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 
00:34:48.994 [2024-07-14 09:44:33.344443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.344469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.344685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.344711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.344921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.344949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.345148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.345175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.345364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.345391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.345616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.345643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.345827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.345853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.346079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.346105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.346329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.346356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.346577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.346603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 
00:34:48.994 [2024-07-14 09:44:33.346789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.346815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.347076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.347104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.347304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.347331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.347506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.347532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.347732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.347758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.347950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.347977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.348194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.348219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.348435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.348461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.348694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.348720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.348914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.348941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 
00:34:48.994 [2024-07-14 09:44:33.349131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.349157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.349354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.349381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.349580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.349606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.349805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.349831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.350033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.350059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.350266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.350293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.350491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.350523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.350716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.350742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.350925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.350953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.351178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.351204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 
00:34:48.994 [2024-07-14 09:44:33.351400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.351427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.351654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.351680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.351849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.351882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.352081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.352108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.352343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.352384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.352618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.352645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.352824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.352872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.353058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.353084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.353279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.353306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.353520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.353546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 
00:34:48.994 [2024-07-14 09:44:33.353750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.353776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.353994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.354020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.354210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.354237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.354422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.354448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.354656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.354682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.354878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.354907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.355101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.355129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.355355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.355381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.355601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.355627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.355813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.355841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 
00:34:48.994 [2024-07-14 09:44:33.356084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.356110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.356316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.356344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.356529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.356555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.356765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.356791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.356982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.357008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.357197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.357224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.357393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.357419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.357615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.357641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.357877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.357903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.358101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.358128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 
00:34:48.994 [2024-07-14 09:44:33.358329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.358355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.358558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.358583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.358814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.358841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.359052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.359080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.359258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.359284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.994 [2024-07-14 09:44:33.359459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.994 [2024-07-14 09:44:33.359486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.994 qpair failed and we were unable to recover it. 00:34:48.995 [2024-07-14 09:44:33.359645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.359677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 [2024-07-14 09:44:33.359995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.360022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 [2024-07-14 09:44:33.360250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.360276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 [2024-07-14 09:44:33.360543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.360569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 
00:34:48.995 [2024-07-14 09:44:33.360793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.360833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 [2024-07-14 09:44:33.361038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.361065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 [2024-07-14 09:44:33.361256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.361283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 [2024-07-14 09:44:33.361461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.361487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 [2024-07-14 09:44:33.361682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.361709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 [2024-07-14 09:44:33.361894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.361920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 [2024-07-14 09:44:33.362110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.362141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 [2024-07-14 09:44:33.362305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.362333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 898166 Killed "${NVMF_APP[@]}" "$@" 00:34:48.995 [2024-07-14 09:44:33.362556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.362583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 
00:34:48.995 [2024-07-14 09:44:33.362797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 09:44:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:34:48.995 [2024-07-14 09:44:33.362830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 09:44:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:34:48.995 [2024-07-14 09:44:33.363010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.363037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 09:44:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:48.995 09:44:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:48.995 [2024-07-14 09:44:33.363257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.363283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 09:44:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:48.995 [2024-07-14 09:44:33.363453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.363479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 [2024-07-14 09:44:33.363695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.363722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 [2024-07-14 09:44:33.363908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.363936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 [2024-07-14 09:44:33.364127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.364153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 [2024-07-14 09:44:33.364370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.364397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 
00:34:48.995 [2024-07-14 09:44:33.364617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.364643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 [2024-07-14 09:44:33.364859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.364892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 [2024-07-14 09:44:33.365058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.365084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 [2024-07-14 09:44:33.365283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.365310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 [2024-07-14 09:44:33.365507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.365533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 [2024-07-14 09:44:33.365693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.365719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 [2024-07-14 09:44:33.365915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.365943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 [2024-07-14 09:44:33.366108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.366136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 [2024-07-14 09:44:33.366300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.366326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 [2024-07-14 09:44:33.366577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.366602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 
00:34:48.995 [2024-07-14 09:44:33.366825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.366874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 [2024-07-14 09:44:33.367138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.367164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 [2024-07-14 09:44:33.367347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.367373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 [2024-07-14 09:44:33.367548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.367574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 [2024-07-14 09:44:33.367761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.367788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 [2024-07-14 09:44:33.368023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.368051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 09:44:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=898717 00:34:48.995 09:44:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:34:48.995 [2024-07-14 09:44:33.368242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.368270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 09:44:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 898717 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 [2024-07-14 09:44:33.368521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 09:44:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 898717 ']' 00:34:48.995 [2024-07-14 09:44:33.368548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 
00:34:48.995 09:44:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:48.995 [2024-07-14 09:44:33.368764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.368792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 09:44:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:48.995 [2024-07-14 09:44:33.368957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.368985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 09:44:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:48.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:48.995 [2024-07-14 09:44:33.369177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 09:44:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:48.995 [2024-07-14 09:44:33.369204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 09:44:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:48.995 [2024-07-14 09:44:33.369392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.369418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 [2024-07-14 09:44:33.369615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.369642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 [2024-07-14 09:44:33.369871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.369899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 [2024-07-14 09:44:33.370114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.370140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 
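[editor note] The records above and below repeat "connect() failed, errno = 111" while the killed nvmf_tgt process is being relaunched. On Linux, errno 111 is ECONNREFUSED: the initiator's TCP connect() to 10.0.0.2:4420 is answered, but nothing is listening on that port until the target finishes starting. The following is a minimal standalone sketch (not part of the SPDK test suite; only the address and port are taken from the log, everything else is an illustrative assumption) showing the same failure mode:

    /* sketch: connect() to a port with no listener returns ECONNREFUSED (111) */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                     /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);  /* target address from the log */

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* While the target is down this prints:
             * connect: Connection refused (errno=111) */
            printf("connect: %s (errno=%d)\n", strerror(errno), errno);
        }

        close(fd);
        return 0;
    }

Once the relaunched nvmf_tgt is listening again, the same connect() succeeds and the qpair errors stop.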
00:34:48.995 [2024-07-14 09:44:33.370292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.370320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 [2024-07-14 09:44:33.370514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.370543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 [2024-07-14 09:44:33.370741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.370768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 [2024-07-14 09:44:33.370994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.371023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 [2024-07-14 09:44:33.371191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.371218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 [2024-07-14 09:44:33.371411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.371438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 [2024-07-14 09:44:33.371632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.371659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 [2024-07-14 09:44:33.371892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.371920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 [2024-07-14 09:44:33.372106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.372133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 [2024-07-14 09:44:33.372354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.372381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 
00:34:48.995 [2024-07-14 09:44:33.372598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.372624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 [2024-07-14 09:44:33.372840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.372873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 [2024-07-14 09:44:33.373077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.373104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 [2024-07-14 09:44:33.373263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.373290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 [2024-07-14 09:44:33.373478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.373508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 [2024-07-14 09:44:33.373697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.373724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 [2024-07-14 09:44:33.373909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.373936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 [2024-07-14 09:44:33.374126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.374153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 [2024-07-14 09:44:33.374339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.374366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 [2024-07-14 09:44:33.374556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.374583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 
00:34:48.995 [2024-07-14 09:44:33.374768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.374795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 [2024-07-14 09:44:33.375002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.375031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 [2024-07-14 09:44:33.375230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.375257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 [2024-07-14 09:44:33.375479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.375506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 [2024-07-14 09:44:33.375696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.375723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 [2024-07-14 09:44:33.375939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.375967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 [2024-07-14 09:44:33.376162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.995 [2024-07-14 09:44:33.376189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.995 qpair failed and we were unable to recover it. 00:34:48.995 [2024-07-14 09:44:33.376389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.376416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.376617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.376644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.376858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.376893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 
00:34:48.996 [2024-07-14 09:44:33.377086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.377113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.377276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.377302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.377486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.377513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.377677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.377704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.377900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.377927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.378114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.378141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.378305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.378333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.378519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.378545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.378710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.378737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.378931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.378959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 
00:34:48.996 [2024-07-14 09:44:33.379119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.379146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.379347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.379374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.379560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.379586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.379803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.379829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.380028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.380055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.380244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.380270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.380463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.380490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.380653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.380680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.380876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.380903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.381122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.381149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 
00:34:48.996 [2024-07-14 09:44:33.381341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.381368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.381586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.381613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.381803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.381829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.382028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.382057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.382216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.382248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.382412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.382439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.382603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.382630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.382847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.382885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.383095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.383122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.383308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.383334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 
00:34:48.996 [2024-07-14 09:44:33.383502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.383528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.383722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.383750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.383941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.383968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.384159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.384185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.384376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.384404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.384595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.384621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.384813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.384840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.385038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.385065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.385262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.385288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.385453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.385479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 
00:34:48.996 [2024-07-14 09:44:33.385696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.385723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.385880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.385907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.386083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.386109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.386298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.386325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.386515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.386542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.386736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.386762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.386982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.387010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.387181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.387208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.387424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.387451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.387612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.387639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 
00:34:48.996 [2024-07-14 09:44:33.387853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.387888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.388063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.388091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.388281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.388308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.388497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.388524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.388692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.388719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.388879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.388906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.389097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.389123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.389290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.389317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.389506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.389533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.389722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.389748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 
00:34:48.996 [2024-07-14 09:44:33.389917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.389944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.390112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.390138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.390361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.390387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.390603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.390629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.390842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.390882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.391058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.391085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.391277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.391304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.391493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.391520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.391758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.391785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.391946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.391973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 
00:34:48.996 [2024-07-14 09:44:33.392143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.392170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.392361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.392387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.392569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.392596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.392792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.392819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.393047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.996 [2024-07-14 09:44:33.393074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.996 qpair failed and we were unable to recover it. 00:34:48.996 [2024-07-14 09:44:33.393262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.393288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.393544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.393569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.393745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.393772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.393982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.394010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.394194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.394221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 
00:34:48.997 [2024-07-14 09:44:33.394463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.394489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.394705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.394730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.394962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.394990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.395180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.395207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.395381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.395407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.395606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.395633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.395800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.395826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.396029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.396056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.396246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.396272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.396459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.396487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 
00:34:48.997 [2024-07-14 09:44:33.396743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.396769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.396938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.396966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.397182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.397208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.397419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.397445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.397637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.397664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.397879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.397906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.398096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.398122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.398335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.398362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.398532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.398558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.398743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.398769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 
00:34:48.997 [2024-07-14 09:44:33.398977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.399005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.399201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.399227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.399411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.399436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.399632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.399658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.399846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.399893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.400092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.400120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.400278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.400320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.400552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.400578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.400766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.400792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.401056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.401083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 
00:34:48.997 [2024-07-14 09:44:33.401243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.401270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.401479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.401505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.401727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.401754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.401947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.401974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.402171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.402197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.402362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.402389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.402576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.402602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.402815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.402857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.403129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.403155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.403380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.403406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 
00:34:48.997 [2024-07-14 09:44:33.403595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.403622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.403858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.403901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.404122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.404149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.404325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.404350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.404548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.404574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.404788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.404814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.405020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.405048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.405213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.405239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.405433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.405460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.405677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.405704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 
00:34:48.997 [2024-07-14 09:44:33.405885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.405912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.406118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.406145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.406374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.406400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.406603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.406631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.406846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.406889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.407073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.407099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.407308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.407335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.407526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.407553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.407717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.407744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.407961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.407988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 
00:34:48.997 [2024-07-14 09:44:33.408160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.408187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.408452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.408478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.408658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.408686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.408880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.408906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.409073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.409107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.409301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.409327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.409668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.409693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.409974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.410001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.410226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.410253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.410468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.410494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 
00:34:48.997 [2024-07-14 09:44:33.410686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.410712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.410925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.410953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.411123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.411164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.411324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.411349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.411563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.411589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.411801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.997 [2024-07-14 09:44:33.411827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.997 qpair failed and we were unable to recover it. 00:34:48.997 [2024-07-14 09:44:33.412027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.998 [2024-07-14 09:44:33.412054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.998 qpair failed and we were unable to recover it. 00:34:48.998 [2024-07-14 09:44:33.412311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.998 [2024-07-14 09:44:33.412337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.998 qpair failed and we were unable to recover it. 00:34:48.998 [2024-07-14 09:44:33.412540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.998 [2024-07-14 09:44:33.412567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.998 qpair failed and we were unable to recover it. 00:34:48.998 [2024-07-14 09:44:33.412800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.998 [2024-07-14 09:44:33.412825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.998 qpair failed and we were unable to recover it. 
00:34:48.998 [2024-07-14 09:44:33.412987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.998 [2024-07-14 09:44:33.413014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.998 qpair failed and we were unable to recover it. 00:34:48.998 [2024-07-14 09:44:33.413253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.998 [2024-07-14 09:44:33.413280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.998 qpair failed and we were unable to recover it. 00:34:48.998 [2024-07-14 09:44:33.413468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.998 [2024-07-14 09:44:33.413495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.998 qpair failed and we were unable to recover it. 00:34:48.998 [2024-07-14 09:44:33.413760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.998 [2024-07-14 09:44:33.413786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.998 qpair failed and we were unable to recover it. 00:34:48.998 [2024-07-14 09:44:33.413960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.998 [2024-07-14 09:44:33.413988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.998 qpair failed and we were unable to recover it. 00:34:48.998 [2024-07-14 09:44:33.414178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.998 [2024-07-14 09:44:33.414204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.998 qpair failed and we were unable to recover it. 00:34:48.998 [2024-07-14 09:44:33.414458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.998 [2024-07-14 09:44:33.414485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.998 qpair failed and we were unable to recover it. 00:34:48.998 [2024-07-14 09:44:33.414679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.998 [2024-07-14 09:44:33.414706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.998 qpair failed and we were unable to recover it. 00:34:48.998 [2024-07-14 09:44:33.414909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.998 [2024-07-14 09:44:33.414952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.998 qpair failed and we were unable to recover it. 00:34:48.998 [2024-07-14 09:44:33.415168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.998 [2024-07-14 09:44:33.415195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.998 qpair failed and we were unable to recover it. 
00:34:48.998 [2024-07-14 09:44:33.415383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:34:48.998 [2024-07-14 09:44:33.415409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 
00:34:48.998 qpair failed and we were unable to recover it. 
00:34:48.998 [2024-07-14 09:44:33.415603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:34:48.998 [2024-07-14 09:44:33.415630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 
00:34:48.998 qpair failed and we were unable to recover it. 
00:34:48.998 [2024-07-14 09:44:33.415893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:34:48.998 [2024-07-14 09:44:33.415920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 
00:34:48.998 qpair failed and we were unable to recover it. 
00:34:48.998 [2024-07-14 09:44:33.416109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:34:48.998 [2024-07-14 09:44:33.416136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 
00:34:48.998 qpair failed and we were unable to recover it. 
00:34:48.998 [2024-07-14 09:44:33.416329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:34:48.998 [2024-07-14 09:44:33.416356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 
00:34:48.998 qpair failed and we were unable to recover it. 
00:34:48.998 [2024-07-14 09:44:33.416527] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:34:48.998 [2024-07-14 09:44:33.416582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:34:48.998 [2024-07-14 09:44:33.416614] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 
00:34:48.998 [2024-07-14 09:44:33.416624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 
00:34:48.998 qpair failed and we were unable to recover it. 
00:34:48.998 [2024-07-14 09:44:33.416819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:34:48.998 [2024-07-14 09:44:33.416845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 
00:34:48.998 qpair failed and we were unable to recover it. 
00:34:48.998 [2024-07-14 09:44:33.417064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:34:48.998 [2024-07-14 09:44:33.417091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 
00:34:48.998 qpair failed and we were unable to recover it. 
00:34:48.998 [2024-07-14 09:44:33.417257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:34:48.998 [2024-07-14 09:44:33.417285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 
00:34:48.998 qpair failed and we were unable to recover it. 
00:34:48.998 [2024-07-14 09:44:33.417475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.998 [2024-07-14 09:44:33.417501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.998 qpair failed and we were unable to recover it. 00:34:48.998 [2024-07-14 09:44:33.417668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.998 [2024-07-14 09:44:33.417694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.998 qpair failed and we were unable to recover it. 00:34:48.998 [2024-07-14 09:44:33.417879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.998 [2024-07-14 09:44:33.417906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.998 qpair failed and we were unable to recover it. 00:34:48.998 [2024-07-14 09:44:33.418099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.998 [2024-07-14 09:44:33.418126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.998 qpair failed and we were unable to recover it. 00:34:48.998 [2024-07-14 09:44:33.418316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.998 [2024-07-14 09:44:33.418343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.998 qpair failed and we were unable to recover it. 00:34:48.998 [2024-07-14 09:44:33.418579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.998 [2024-07-14 09:44:33.418605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.998 qpair failed and we were unable to recover it. 00:34:48.998 [2024-07-14 09:44:33.418823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.998 [2024-07-14 09:44:33.418850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.998 qpair failed and we were unable to recover it. 00:34:48.998 [2024-07-14 09:44:33.419085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.998 [2024-07-14 09:44:33.419113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.998 qpair failed and we were unable to recover it. 00:34:48.998 [2024-07-14 09:44:33.419277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.998 [2024-07-14 09:44:33.419305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.998 qpair failed and we were unable to recover it. 00:34:48.998 [2024-07-14 09:44:33.419525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.998 [2024-07-14 09:44:33.419553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.998 qpair failed and we were unable to recover it. 
00:34:48.998 [2024-07-14 09:44:33.419749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.998 [2024-07-14 09:44:33.419776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.998 qpair failed and we were unable to recover it. 00:34:48.998 [2024-07-14 09:44:33.420003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.998 [2024-07-14 09:44:33.420031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.998 qpair failed and we were unable to recover it. 00:34:48.998 [2024-07-14 09:44:33.420223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.998 [2024-07-14 09:44:33.420249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.998 qpair failed and we were unable to recover it. 00:34:48.998 [2024-07-14 09:44:33.420441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.998 [2024-07-14 09:44:33.420468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.998 qpair failed and we were unable to recover it. 00:34:48.998 [2024-07-14 09:44:33.420690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.998 [2024-07-14 09:44:33.420717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.998 qpair failed and we were unable to recover it. 00:34:48.998 [2024-07-14 09:44:33.420912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.998 [2024-07-14 09:44:33.420939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.998 qpair failed and we were unable to recover it. 00:34:48.998 [2024-07-14 09:44:33.421156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.998 [2024-07-14 09:44:33.421182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.998 qpair failed and we were unable to recover it. 00:34:48.998 [2024-07-14 09:44:33.421342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.998 [2024-07-14 09:44:33.421373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.998 qpair failed and we were unable to recover it. 00:34:48.998 [2024-07-14 09:44:33.421559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.998 [2024-07-14 09:44:33.421586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.998 qpair failed and we were unable to recover it. 00:34:48.998 [2024-07-14 09:44:33.421779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.998 [2024-07-14 09:44:33.421806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.998 qpair failed and we were unable to recover it. 
00:34:48.998 [2024-07-14 09:44:33.421973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.998 [2024-07-14 09:44:33.422002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.998 qpair failed and we were unable to recover it. 00:34:48.998 [2024-07-14 09:44:33.422217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.998 [2024-07-14 09:44:33.422244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.998 qpair failed and we were unable to recover it. 00:34:48.998 [2024-07-14 09:44:33.422434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.998 [2024-07-14 09:44:33.422461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.998 qpair failed and we were unable to recover it. 00:34:48.998 [2024-07-14 09:44:33.422650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.998 [2024-07-14 09:44:33.422677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.998 qpair failed and we were unable to recover it. 00:34:48.998 [2024-07-14 09:44:33.422894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.998 [2024-07-14 09:44:33.422923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.998 qpair failed and we were unable to recover it. 00:34:48.998 [2024-07-14 09:44:33.423121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.998 [2024-07-14 09:44:33.423148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.998 qpair failed and we were unable to recover it. 00:34:48.998 [2024-07-14 09:44:33.423313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.998 [2024-07-14 09:44:33.423339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:48.998 qpair failed and we were unable to recover it. 00:34:49.273 [2024-07-14 09:44:33.423504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.273 [2024-07-14 09:44:33.423531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.273 qpair failed and we were unable to recover it. 00:34:49.273 [2024-07-14 09:44:33.423721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.273 [2024-07-14 09:44:33.423748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.273 qpair failed and we were unable to recover it. 00:34:49.273 [2024-07-14 09:44:33.423944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.273 [2024-07-14 09:44:33.423971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.273 qpair failed and we were unable to recover it. 
00:34:49.274 [2024-07-14 09:44:33.424157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.274 [2024-07-14 09:44:33.424184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.274 qpair failed and we were unable to recover it. 00:34:49.274 [2024-07-14 09:44:33.424347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.274 [2024-07-14 09:44:33.424374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.274 qpair failed and we were unable to recover it. 00:34:49.274 [2024-07-14 09:44:33.424572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.274 [2024-07-14 09:44:33.424598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.274 qpair failed and we were unable to recover it. 00:34:49.274 [2024-07-14 09:44:33.424767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.274 [2024-07-14 09:44:33.424794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.274 qpair failed and we were unable to recover it. 00:34:49.274 [2024-07-14 09:44:33.424957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.274 [2024-07-14 09:44:33.424984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.274 qpair failed and we were unable to recover it. 00:34:49.274 [2024-07-14 09:44:33.425170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.274 [2024-07-14 09:44:33.425197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.274 qpair failed and we were unable to recover it. 00:34:49.274 [2024-07-14 09:44:33.425397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.274 [2024-07-14 09:44:33.425424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.274 qpair failed and we were unable to recover it. 00:34:49.274 [2024-07-14 09:44:33.425589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.274 [2024-07-14 09:44:33.425615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.274 qpair failed and we were unable to recover it. 00:34:49.274 [2024-07-14 09:44:33.425786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.274 [2024-07-14 09:44:33.425813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.274 qpair failed and we were unable to recover it. 00:34:49.274 [2024-07-14 09:44:33.425977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.274 [2024-07-14 09:44:33.426004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.274 qpair failed and we were unable to recover it. 
00:34:49.274 [2024-07-14 09:44:33.426174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.274 [2024-07-14 09:44:33.426201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.274 qpair failed and we were unable to recover it. 00:34:49.274 [2024-07-14 09:44:33.426424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.274 [2024-07-14 09:44:33.426451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.274 qpair failed and we were unable to recover it. 00:34:49.274 [2024-07-14 09:44:33.426643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.274 [2024-07-14 09:44:33.426669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.274 qpair failed and we were unable to recover it. 00:34:49.274 [2024-07-14 09:44:33.426832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.274 [2024-07-14 09:44:33.426858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.274 qpair failed and we were unable to recover it. 00:34:49.274 [2024-07-14 09:44:33.427092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.274 [2024-07-14 09:44:33.427134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.274 qpair failed and we were unable to recover it. 00:34:49.274 [2024-07-14 09:44:33.427341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.274 [2024-07-14 09:44:33.427382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.274 qpair failed and we were unable to recover it. 00:34:49.274 [2024-07-14 09:44:33.427563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.274 [2024-07-14 09:44:33.427590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.274 qpair failed and we were unable to recover it. 00:34:49.274 [2024-07-14 09:44:33.427780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.274 [2024-07-14 09:44:33.427807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.274 qpair failed and we were unable to recover it. 00:34:49.274 [2024-07-14 09:44:33.428003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.274 [2024-07-14 09:44:33.428031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.274 qpair failed and we were unable to recover it. 00:34:49.274 [2024-07-14 09:44:33.428202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.274 [2024-07-14 09:44:33.428229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.274 qpair failed and we were unable to recover it. 
00:34:49.274 [2024-07-14 09:44:33.428424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.274 [2024-07-14 09:44:33.428450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.274 qpair failed and we were unable to recover it. 00:34:49.274 [2024-07-14 09:44:33.428611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.274 [2024-07-14 09:44:33.428637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.274 qpair failed and we were unable to recover it. 00:34:49.274 [2024-07-14 09:44:33.428825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.274 [2024-07-14 09:44:33.428851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.274 qpair failed and we were unable to recover it. 00:34:49.274 [2024-07-14 09:44:33.429057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.274 [2024-07-14 09:44:33.429083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.274 qpair failed and we were unable to recover it. 00:34:49.274 [2024-07-14 09:44:33.429246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.274 [2024-07-14 09:44:33.429271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.274 qpair failed and we were unable to recover it. 00:34:49.274 [2024-07-14 09:44:33.429462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.274 [2024-07-14 09:44:33.429489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.274 qpair failed and we were unable to recover it. 00:34:49.274 [2024-07-14 09:44:33.429674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.274 [2024-07-14 09:44:33.429700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.274 qpair failed and we were unable to recover it. 00:34:49.274 [2024-07-14 09:44:33.429887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.274 [2024-07-14 09:44:33.429914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.274 qpair failed and we were unable to recover it. 00:34:49.274 [2024-07-14 09:44:33.430112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.274 [2024-07-14 09:44:33.430138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.274 qpair failed and we were unable to recover it. 00:34:49.274 [2024-07-14 09:44:33.430333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.274 [2024-07-14 09:44:33.430359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.274 qpair failed and we were unable to recover it. 
00:34:49.274 [2024-07-14 09:44:33.430571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.274 [2024-07-14 09:44:33.430596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.274 qpair failed and we were unable to recover it. 00:34:49.274 [2024-07-14 09:44:33.430787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.274 [2024-07-14 09:44:33.430813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.274 qpair failed and we were unable to recover it. 00:34:49.274 [2024-07-14 09:44:33.431051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.274 [2024-07-14 09:44:33.431091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.274 qpair failed and we were unable to recover it. 00:34:49.274 [2024-07-14 09:44:33.431318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.274 [2024-07-14 09:44:33.431345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.274 qpair failed and we were unable to recover it. 00:34:49.274 [2024-07-14 09:44:33.431539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.274 [2024-07-14 09:44:33.431565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.274 qpair failed and we were unable to recover it. 00:34:49.274 [2024-07-14 09:44:33.431817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.274 [2024-07-14 09:44:33.431844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.274 qpair failed and we were unable to recover it. 00:34:49.274 [2024-07-14 09:44:33.432045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.274 [2024-07-14 09:44:33.432072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.274 qpair failed and we were unable to recover it. 00:34:49.274 [2024-07-14 09:44:33.432260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.274 [2024-07-14 09:44:33.432286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.274 qpair failed and we were unable to recover it. 00:34:49.274 [2024-07-14 09:44:33.432496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.274 [2024-07-14 09:44:33.432523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.274 qpair failed and we were unable to recover it. 00:34:49.274 [2024-07-14 09:44:33.432740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.274 [2024-07-14 09:44:33.432766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.274 qpair failed and we were unable to recover it. 
00:34:49.274 [2024-07-14 09:44:33.432933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.274 [2024-07-14 09:44:33.432960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.274 qpair failed and we were unable to recover it. 00:34:49.274 [2024-07-14 09:44:33.433229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.275 [2024-07-14 09:44:33.433275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.275 qpair failed and we were unable to recover it. 00:34:49.275 [2024-07-14 09:44:33.433467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.275 [2024-07-14 09:44:33.433493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.275 qpair failed and we were unable to recover it. 00:34:49.275 [2024-07-14 09:44:33.433708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.275 [2024-07-14 09:44:33.433734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.275 qpair failed and we were unable to recover it. 00:34:49.275 [2024-07-14 09:44:33.433992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.275 [2024-07-14 09:44:33.434019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.275 qpair failed and we were unable to recover it. 00:34:49.275 [2024-07-14 09:44:33.434216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.275 [2024-07-14 09:44:33.434242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.275 qpair failed and we were unable to recover it. 00:34:49.275 [2024-07-14 09:44:33.434429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.275 [2024-07-14 09:44:33.434455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.275 qpair failed and we were unable to recover it. 00:34:49.275 [2024-07-14 09:44:33.434669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.275 [2024-07-14 09:44:33.434695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.275 qpair failed and we were unable to recover it. 00:34:49.275 [2024-07-14 09:44:33.434890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.275 [2024-07-14 09:44:33.434918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.275 qpair failed and we were unable to recover it. 00:34:49.275 [2024-07-14 09:44:33.435075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.275 [2024-07-14 09:44:33.435102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.275 qpair failed and we were unable to recover it. 
00:34:49.275 [2024-07-14 09:44:33.435296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.275 [2024-07-14 09:44:33.435323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.275 qpair failed and we were unable to recover it. 00:34:49.275 [2024-07-14 09:44:33.435545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.275 [2024-07-14 09:44:33.435572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.275 qpair failed and we were unable to recover it. 00:34:49.275 [2024-07-14 09:44:33.435776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.275 [2024-07-14 09:44:33.435802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.275 qpair failed and we were unable to recover it. 00:34:49.275 [2024-07-14 09:44:33.436018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.275 [2024-07-14 09:44:33.436044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.275 qpair failed and we were unable to recover it. 00:34:49.275 [2024-07-14 09:44:33.436243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.275 [2024-07-14 09:44:33.436270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.275 qpair failed and we were unable to recover it. 00:34:49.275 [2024-07-14 09:44:33.436471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.275 [2024-07-14 09:44:33.436498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.275 qpair failed and we were unable to recover it. 00:34:49.275 [2024-07-14 09:44:33.436723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.275 [2024-07-14 09:44:33.436750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.275 qpair failed and we were unable to recover it. 00:34:49.275 [2024-07-14 09:44:33.436940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.275 [2024-07-14 09:44:33.436967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.275 qpair failed and we were unable to recover it. 00:34:49.275 [2024-07-14 09:44:33.437149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.275 [2024-07-14 09:44:33.437175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.275 qpair failed and we were unable to recover it. 00:34:49.275 [2024-07-14 09:44:33.437473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.275 [2024-07-14 09:44:33.437499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.275 qpair failed and we were unable to recover it. 
00:34:49.275 [2024-07-14 09:44:33.437691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.275 [2024-07-14 09:44:33.437718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.275 qpair failed and we were unable to recover it. 00:34:49.275 [2024-07-14 09:44:33.437907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.275 [2024-07-14 09:44:33.437934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.275 qpair failed and we were unable to recover it. 00:34:49.275 [2024-07-14 09:44:33.438129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.275 [2024-07-14 09:44:33.438155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.275 qpair failed and we were unable to recover it. 00:34:49.275 [2024-07-14 09:44:33.438345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.275 [2024-07-14 09:44:33.438371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.275 qpair failed and we were unable to recover it. 00:34:49.275 [2024-07-14 09:44:33.438585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.275 [2024-07-14 09:44:33.438611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.275 qpair failed and we were unable to recover it. 00:34:49.275 [2024-07-14 09:44:33.438800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.275 [2024-07-14 09:44:33.438826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.275 qpair failed and we were unable to recover it. 00:34:49.275 [2024-07-14 09:44:33.438998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.275 [2024-07-14 09:44:33.439026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.275 qpair failed and we were unable to recover it. 00:34:49.275 [2024-07-14 09:44:33.439221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.275 [2024-07-14 09:44:33.439248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.275 qpair failed and we were unable to recover it. 00:34:49.275 [2024-07-14 09:44:33.439470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.275 [2024-07-14 09:44:33.439497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.275 qpair failed and we were unable to recover it. 00:34:49.275 [2024-07-14 09:44:33.439719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.275 [2024-07-14 09:44:33.439745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.275 qpair failed and we were unable to recover it. 
00:34:49.275 [2024-07-14 09:44:33.439937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.275 [2024-07-14 09:44:33.439964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.275 qpair failed and we were unable to recover it. 00:34:49.275 [2024-07-14 09:44:33.440159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.275 [2024-07-14 09:44:33.440186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.275 qpair failed and we were unable to recover it. 00:34:49.275 [2024-07-14 09:44:33.440377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.275 [2024-07-14 09:44:33.440403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.275 qpair failed and we were unable to recover it. 00:34:49.275 [2024-07-14 09:44:33.440628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.275 [2024-07-14 09:44:33.440654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.275 qpair failed and we were unable to recover it. 00:34:49.275 [2024-07-14 09:44:33.440844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.275 [2024-07-14 09:44:33.440877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.275 qpair failed and we were unable to recover it. 00:34:49.275 [2024-07-14 09:44:33.441100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.275 [2024-07-14 09:44:33.441126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.275 qpair failed and we were unable to recover it. 00:34:49.275 [2024-07-14 09:44:33.441343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.275 [2024-07-14 09:44:33.441369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.275 qpair failed and we were unable to recover it. 00:34:49.275 [2024-07-14 09:44:33.441585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.275 [2024-07-14 09:44:33.441612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.275 qpair failed and we were unable to recover it. 00:34:49.275 [2024-07-14 09:44:33.441806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.275 [2024-07-14 09:44:33.441832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.275 qpair failed and we were unable to recover it. 00:34:49.275 [2024-07-14 09:44:33.442059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.275 [2024-07-14 09:44:33.442086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.275 qpair failed and we were unable to recover it. 
00:34:49.275 [2024-07-14 09:44:33.442274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.275 [2024-07-14 09:44:33.442300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.275 qpair failed and we were unable to recover it. 00:34:49.275 [2024-07-14 09:44:33.442461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.275 [2024-07-14 09:44:33.442491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.275 qpair failed and we were unable to recover it. 00:34:49.275 [2024-07-14 09:44:33.442706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.276 [2024-07-14 09:44:33.442733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.276 qpair failed and we were unable to recover it. 00:34:49.276 [2024-07-14 09:44:33.442924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.276 [2024-07-14 09:44:33.442953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.276 qpair failed and we were unable to recover it. 00:34:49.276 [2024-07-14 09:44:33.443123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.276 [2024-07-14 09:44:33.443150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.276 qpair failed and we were unable to recover it. 00:34:49.276 [2024-07-14 09:44:33.443313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.276 [2024-07-14 09:44:33.443339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.276 qpair failed and we were unable to recover it. 00:34:49.276 [2024-07-14 09:44:33.443501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.276 [2024-07-14 09:44:33.443528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.276 qpair failed and we were unable to recover it. 00:34:49.276 [2024-07-14 09:44:33.443703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.276 [2024-07-14 09:44:33.443731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.276 qpair failed and we were unable to recover it. 00:34:49.276 [2024-07-14 09:44:33.443951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.276 [2024-07-14 09:44:33.443978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.276 qpair failed and we were unable to recover it. 00:34:49.276 [2024-07-14 09:44:33.444140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.276 [2024-07-14 09:44:33.444167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.276 qpair failed and we were unable to recover it. 
00:34:49.276 [2024-07-14 09:44:33.444391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.276 [2024-07-14 09:44:33.444417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.276 qpair failed and we were unable to recover it. 00:34:49.276 [2024-07-14 09:44:33.444606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.276 [2024-07-14 09:44:33.444632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.276 qpair failed and we were unable to recover it. 00:34:49.276 [2024-07-14 09:44:33.444852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.276 [2024-07-14 09:44:33.444886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.276 qpair failed and we were unable to recover it. 00:34:49.276 [2024-07-14 09:44:33.445047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.276 [2024-07-14 09:44:33.445074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.276 qpair failed and we were unable to recover it. 00:34:49.276 [2024-07-14 09:44:33.445261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.276 [2024-07-14 09:44:33.445289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.276 qpair failed and we were unable to recover it. 00:34:49.276 [2024-07-14 09:44:33.445487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.276 [2024-07-14 09:44:33.445513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.276 qpair failed and we were unable to recover it. 00:34:49.276 [2024-07-14 09:44:33.445725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.276 [2024-07-14 09:44:33.445751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.276 qpair failed and we were unable to recover it. 00:34:49.276 [2024-07-14 09:44:33.445917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.276 [2024-07-14 09:44:33.445945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.276 qpair failed and we were unable to recover it. 00:34:49.276 [2024-07-14 09:44:33.446135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.276 [2024-07-14 09:44:33.446161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.276 qpair failed and we were unable to recover it. 00:34:49.276 [2024-07-14 09:44:33.446362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.276 [2024-07-14 09:44:33.446388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.276 qpair failed and we were unable to recover it. 
00:34:49.276 [2024-07-14 09:44:33.446553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.276 [2024-07-14 09:44:33.446580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.276 qpair failed and we were unable to recover it. 00:34:49.276 [2024-07-14 09:44:33.446777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.276 [2024-07-14 09:44:33.446803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.276 qpair failed and we were unable to recover it. 00:34:49.276 [2024-07-14 09:44:33.446992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.276 [2024-07-14 09:44:33.447020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.276 qpair failed and we were unable to recover it. 00:34:49.276 [2024-07-14 09:44:33.447181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.276 [2024-07-14 09:44:33.447207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.276 qpair failed and we were unable to recover it. 00:34:49.276 [2024-07-14 09:44:33.447395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.276 [2024-07-14 09:44:33.447421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.276 qpair failed and we were unable to recover it. 00:34:49.276 [2024-07-14 09:44:33.447590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.276 [2024-07-14 09:44:33.447616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.276 qpair failed and we were unable to recover it. 00:34:49.276 [2024-07-14 09:44:33.447806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.276 [2024-07-14 09:44:33.447832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.276 qpair failed and we were unable to recover it. 00:34:49.276 [2024-07-14 09:44:33.448052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.276 [2024-07-14 09:44:33.448078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.276 qpair failed and we were unable to recover it. 00:34:49.276 [2024-07-14 09:44:33.448255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.276 [2024-07-14 09:44:33.448282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.276 qpair failed and we were unable to recover it. 00:34:49.276 [2024-07-14 09:44:33.448472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.276 [2024-07-14 09:44:33.448498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.276 qpair failed and we were unable to recover it. 
00:34:49.276 [2024-07-14 09:44:33.448712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.276 [2024-07-14 09:44:33.448738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.276 qpair failed and we were unable to recover it. 00:34:49.276 [2024-07-14 09:44:33.448923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.276 [2024-07-14 09:44:33.448949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.276 qpair failed and we were unable to recover it. 00:34:49.276 [2024-07-14 09:44:33.449117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.276 [2024-07-14 09:44:33.449144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.276 qpair failed and we were unable to recover it. 00:34:49.276 [2024-07-14 09:44:33.449334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.276 [2024-07-14 09:44:33.449360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.276 qpair failed and we were unable to recover it. 00:34:49.276 [2024-07-14 09:44:33.449578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.276 [2024-07-14 09:44:33.449604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.276 qpair failed and we were unable to recover it. 00:34:49.276 [2024-07-14 09:44:33.449763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.276 [2024-07-14 09:44:33.449790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.276 qpair failed and we were unable to recover it. 00:34:49.276 [2024-07-14 09:44:33.450006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.276 [2024-07-14 09:44:33.450033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.276 qpair failed and we were unable to recover it. 00:34:49.276 EAL: No free 2048 kB hugepages reported on node 1 00:34:49.276 [2024-07-14 09:44:33.450197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.276 [2024-07-14 09:44:33.450224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.276 qpair failed and we were unable to recover it. 00:34:49.276 [2024-07-14 09:44:33.450413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.276 [2024-07-14 09:44:33.450439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.276 qpair failed and we were unable to recover it. 00:34:49.276 [2024-07-14 09:44:33.450593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.276 [2024-07-14 09:44:33.450620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.276 qpair failed and we were unable to recover it. 
00:34:49.276 [2024-07-14 09:44:33.450785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.276 [2024-07-14 09:44:33.450812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.276 qpair failed and we were unable to recover it. 00:34:49.276 [2024-07-14 09:44:33.451014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.276 [2024-07-14 09:44:33.451042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.276 qpair failed and we were unable to recover it. 00:34:49.276 [2024-07-14 09:44:33.451257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.276 [2024-07-14 09:44:33.451284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.276 qpair failed and we were unable to recover it. 00:34:49.277 [2024-07-14 09:44:33.451447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.277 [2024-07-14 09:44:33.451473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.277 qpair failed and we were unable to recover it. 00:34:49.277 [2024-07-14 09:44:33.451702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.277 [2024-07-14 09:44:33.451727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.277 qpair failed and we were unable to recover it. 00:34:49.277 [2024-07-14 09:44:33.451930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.277 [2024-07-14 09:44:33.451958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.277 qpair failed and we were unable to recover it. 00:34:49.277 [2024-07-14 09:44:33.452136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.277 [2024-07-14 09:44:33.452162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.277 qpair failed and we were unable to recover it. 00:34:49.277 [2024-07-14 09:44:33.452347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.277 [2024-07-14 09:44:33.452387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.277 qpair failed and we were unable to recover it. 00:34:49.277 [2024-07-14 09:44:33.452594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.277 [2024-07-14 09:44:33.452620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.277 qpair failed and we were unable to recover it. 00:34:49.277 [2024-07-14 09:44:33.452826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.277 [2024-07-14 09:44:33.452853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.277 qpair failed and we were unable to recover it. 
00:34:49.277 [2024-07-14 09:44:33.453021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.277 [2024-07-14 09:44:33.453048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.277 qpair failed and we were unable to recover it. 00:34:49.277 [2024-07-14 09:44:33.453248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.277 [2024-07-14 09:44:33.453274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.277 qpair failed and we were unable to recover it. 00:34:49.277 [2024-07-14 09:44:33.453443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.277 [2024-07-14 09:44:33.453469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.277 qpair failed and we were unable to recover it. 00:34:49.277 [2024-07-14 09:44:33.453685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.277 [2024-07-14 09:44:33.453712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.277 qpair failed and we were unable to recover it. 00:34:49.277 [2024-07-14 09:44:33.453908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.277 [2024-07-14 09:44:33.453939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.277 qpair failed and we were unable to recover it. 00:34:49.277 [2024-07-14 09:44:33.454132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.277 [2024-07-14 09:44:33.454159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.277 qpair failed and we were unable to recover it. 00:34:49.277 [2024-07-14 09:44:33.454319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.277 [2024-07-14 09:44:33.454360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.277 qpair failed and we were unable to recover it. 00:34:49.277 [2024-07-14 09:44:33.454589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.277 [2024-07-14 09:44:33.454615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.277 qpair failed and we were unable to recover it. 00:34:49.277 [2024-07-14 09:44:33.454784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.277 [2024-07-14 09:44:33.454810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.277 qpair failed and we were unable to recover it. 00:34:49.277 [2024-07-14 09:44:33.455008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.277 [2024-07-14 09:44:33.455036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.277 qpair failed and we were unable to recover it. 
00:34:49.277 [2024-07-14 09:44:33.455207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.277 [2024-07-14 09:44:33.455233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.277 qpair failed and we were unable to recover it. 00:34:49.277 [2024-07-14 09:44:33.455424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.277 [2024-07-14 09:44:33.455451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.277 qpair failed and we were unable to recover it. 00:34:49.277 [2024-07-14 09:44:33.455629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.277 [2024-07-14 09:44:33.455654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.277 qpair failed and we were unable to recover it. 00:34:49.277 [2024-07-14 09:44:33.455859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.277 [2024-07-14 09:44:33.455893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.277 qpair failed and we were unable to recover it. 00:34:49.277 [2024-07-14 09:44:33.456052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.277 [2024-07-14 09:44:33.456078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.277 qpair failed and we were unable to recover it. 00:34:49.277 [2024-07-14 09:44:33.456250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.277 [2024-07-14 09:44:33.456276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.277 qpair failed and we were unable to recover it. 00:34:49.277 [2024-07-14 09:44:33.456493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.277 [2024-07-14 09:44:33.456519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.277 qpair failed and we were unable to recover it. 00:34:49.277 [2024-07-14 09:44:33.456716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.277 [2024-07-14 09:44:33.456742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.277 qpair failed and we were unable to recover it. 00:34:49.277 [2024-07-14 09:44:33.456940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.277 [2024-07-14 09:44:33.456968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.277 qpair failed and we were unable to recover it. 00:34:49.277 [2024-07-14 09:44:33.457184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.277 [2024-07-14 09:44:33.457211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.277 qpair failed and we were unable to recover it. 
00:34:49.277 [2024-07-14 09:44:33.457394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.277 [2024-07-14 09:44:33.457420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.277 qpair failed and we were unable to recover it. 00:34:49.277 [2024-07-14 09:44:33.457618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.277 [2024-07-14 09:44:33.457644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.277 qpair failed and we were unable to recover it. 00:34:49.277 [2024-07-14 09:44:33.457803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.277 [2024-07-14 09:44:33.457829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.277 qpair failed and we were unable to recover it. 00:34:49.277 [2024-07-14 09:44:33.458003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.277 [2024-07-14 09:44:33.458031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.277 qpair failed and we were unable to recover it. 00:34:49.277 [2024-07-14 09:44:33.458249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.277 [2024-07-14 09:44:33.458275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.277 qpair failed and we were unable to recover it. 00:34:49.277 [2024-07-14 09:44:33.458493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.277 [2024-07-14 09:44:33.458520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.277 qpair failed and we were unable to recover it. 00:34:49.277 [2024-07-14 09:44:33.458706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.277 [2024-07-14 09:44:33.458734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.277 qpair failed and we were unable to recover it. 00:34:49.277 [2024-07-14 09:44:33.458957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.277 [2024-07-14 09:44:33.458984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.277 qpair failed and we were unable to recover it. 00:34:49.277 [2024-07-14 09:44:33.459175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.277 [2024-07-14 09:44:33.459201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.277 qpair failed and we were unable to recover it. 00:34:49.277 [2024-07-14 09:44:33.459389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.277 [2024-07-14 09:44:33.459415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.277 qpair failed and we were unable to recover it. 
00:34:49.277 [2024-07-14 09:44:33.459638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.277 [2024-07-14 09:44:33.459664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.277 qpair failed and we were unable to recover it. 00:34:49.277 [2024-07-14 09:44:33.459859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.277 [2024-07-14 09:44:33.459892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.277 qpair failed and we were unable to recover it. 00:34:49.277 [2024-07-14 09:44:33.460086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.277 [2024-07-14 09:44:33.460112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.277 qpair failed and we were unable to recover it. 00:34:49.277 [2024-07-14 09:44:33.460343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.277 [2024-07-14 09:44:33.460369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.277 qpair failed and we were unable to recover it. 00:34:49.278 [2024-07-14 09:44:33.460540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.278 [2024-07-14 09:44:33.460566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.278 qpair failed and we were unable to recover it. 00:34:49.278 [2024-07-14 09:44:33.460776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.278 [2024-07-14 09:44:33.460818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.278 qpair failed and we were unable to recover it. 00:34:49.278 [2024-07-14 09:44:33.461025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.278 [2024-07-14 09:44:33.461051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.278 qpair failed and we were unable to recover it. 00:34:49.278 [2024-07-14 09:44:33.461217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.278 [2024-07-14 09:44:33.461245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.278 qpair failed and we were unable to recover it. 00:34:49.278 [2024-07-14 09:44:33.461428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.278 [2024-07-14 09:44:33.461454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.278 qpair failed and we were unable to recover it. 00:34:49.278 [2024-07-14 09:44:33.461652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.278 [2024-07-14 09:44:33.461679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.278 qpair failed and we were unable to recover it. 
00:34:49.278 [2024-07-14 09:44:33.461901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.278 [2024-07-14 09:44:33.461928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.278 qpair failed and we were unable to recover it. 00:34:49.278 [2024-07-14 09:44:33.462093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.278 [2024-07-14 09:44:33.462120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.278 qpair failed and we were unable to recover it. 00:34:49.278 [2024-07-14 09:44:33.462337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.278 [2024-07-14 09:44:33.462363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.278 qpair failed and we were unable to recover it. 00:34:49.278 [2024-07-14 09:44:33.462551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.278 [2024-07-14 09:44:33.462578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.278 qpair failed and we were unable to recover it. 00:34:49.278 [2024-07-14 09:44:33.462774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.278 [2024-07-14 09:44:33.462804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.278 qpair failed and we were unable to recover it. 00:34:49.278 [2024-07-14 09:44:33.463014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.278 [2024-07-14 09:44:33.463042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.278 qpair failed and we were unable to recover it. 00:34:49.278 [2024-07-14 09:44:33.463245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.278 [2024-07-14 09:44:33.463272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.278 qpair failed and we were unable to recover it. 00:34:49.278 [2024-07-14 09:44:33.463461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.278 [2024-07-14 09:44:33.463487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.278 qpair failed and we were unable to recover it. 00:34:49.278 [2024-07-14 09:44:33.463688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.278 [2024-07-14 09:44:33.463715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.278 qpair failed and we were unable to recover it. 00:34:49.278 [2024-07-14 09:44:33.463911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.278 [2024-07-14 09:44:33.463938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.278 qpair failed and we were unable to recover it. 
00:34:49.278 [2024-07-14 09:44:33.464130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.278 [2024-07-14 09:44:33.464158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.278 qpair failed and we were unable to recover it. 00:34:49.278 [2024-07-14 09:44:33.464377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.278 [2024-07-14 09:44:33.464403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.278 qpair failed and we were unable to recover it. 00:34:49.278 [2024-07-14 09:44:33.464570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.278 [2024-07-14 09:44:33.464597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.278 qpair failed and we were unable to recover it. 00:34:49.278 [2024-07-14 09:44:33.464757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.278 [2024-07-14 09:44:33.464800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.278 qpair failed and we were unable to recover it. 00:34:49.278 [2024-07-14 09:44:33.465001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.278 [2024-07-14 09:44:33.465028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.278 qpair failed and we were unable to recover it. 00:34:49.278 [2024-07-14 09:44:33.465215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.278 [2024-07-14 09:44:33.465242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.278 qpair failed and we were unable to recover it. 00:34:49.278 [2024-07-14 09:44:33.465458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.278 [2024-07-14 09:44:33.465484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.278 qpair failed and we were unable to recover it. 00:34:49.278 [2024-07-14 09:44:33.465670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.278 [2024-07-14 09:44:33.465696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.278 qpair failed and we were unable to recover it. 00:34:49.278 [2024-07-14 09:44:33.465859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.278 [2024-07-14 09:44:33.465894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.278 qpair failed and we were unable to recover it. 00:34:49.278 [2024-07-14 09:44:33.466087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.278 [2024-07-14 09:44:33.466114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.278 qpair failed and we were unable to recover it. 
00:34:49.278 [2024-07-14 09:44:33.466316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.278 [2024-07-14 09:44:33.466342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.278 qpair failed and we were unable to recover it. 00:34:49.278 [2024-07-14 09:44:33.466558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.278 [2024-07-14 09:44:33.466584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.278 qpair failed and we were unable to recover it. 00:34:49.278 [2024-07-14 09:44:33.466772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.278 [2024-07-14 09:44:33.466799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.278 qpair failed and we were unable to recover it. 00:34:49.278 [2024-07-14 09:44:33.467017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.278 [2024-07-14 09:44:33.467045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.278 qpair failed and we were unable to recover it. 00:34:49.278 [2024-07-14 09:44:33.467209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.278 [2024-07-14 09:44:33.467235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.278 qpair failed and we were unable to recover it. 00:34:49.278 [2024-07-14 09:44:33.467402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.278 [2024-07-14 09:44:33.467428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.278 qpair failed and we were unable to recover it. 00:34:49.278 [2024-07-14 09:44:33.467664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.278 [2024-07-14 09:44:33.467690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.278 qpair failed and we were unable to recover it. 00:34:49.278 [2024-07-14 09:44:33.467876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.278 [2024-07-14 09:44:33.467903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.278 qpair failed and we were unable to recover it. 00:34:49.279 [2024-07-14 09:44:33.468061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.279 [2024-07-14 09:44:33.468087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.279 qpair failed and we were unable to recover it. 00:34:49.279 [2024-07-14 09:44:33.468329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.279 [2024-07-14 09:44:33.468355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.279 qpair failed and we were unable to recover it. 
00:34:49.279 [2024-07-14 09:44:33.468543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.279 [2024-07-14 09:44:33.468569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.279 qpair failed and we were unable to recover it. 00:34:49.279 [2024-07-14 09:44:33.468789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.279 [2024-07-14 09:44:33.468815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.279 qpair failed and we were unable to recover it. 00:34:49.279 [2024-07-14 09:44:33.468979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.279 [2024-07-14 09:44:33.469006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.279 qpair failed and we were unable to recover it. 00:34:49.279 [2024-07-14 09:44:33.469197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.279 [2024-07-14 09:44:33.469223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.279 qpair failed and we were unable to recover it. 00:34:49.279 [2024-07-14 09:44:33.469420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.279 [2024-07-14 09:44:33.469446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.279 qpair failed and we were unable to recover it. 00:34:49.279 [2024-07-14 09:44:33.469665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.279 [2024-07-14 09:44:33.469691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.279 qpair failed and we were unable to recover it. 00:34:49.279 [2024-07-14 09:44:33.469889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.279 [2024-07-14 09:44:33.469916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.279 qpair failed and we were unable to recover it. 00:34:49.279 [2024-07-14 09:44:33.470109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.279 [2024-07-14 09:44:33.470135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.279 qpair failed and we were unable to recover it. 00:34:49.279 [2024-07-14 09:44:33.470330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.279 [2024-07-14 09:44:33.470356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.279 qpair failed and we were unable to recover it. 00:34:49.279 [2024-07-14 09:44:33.470545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.279 [2024-07-14 09:44:33.470571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.279 qpair failed and we were unable to recover it. 
00:34:49.279 [2024-07-14 09:44:33.470758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.279 [2024-07-14 09:44:33.470784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.279 qpair failed and we were unable to recover it. 00:34:49.279 [2024-07-14 09:44:33.470950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.279 [2024-07-14 09:44:33.470978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.279 qpair failed and we were unable to recover it. 00:34:49.279 [2024-07-14 09:44:33.471171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.279 [2024-07-14 09:44:33.471198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.279 qpair failed and we were unable to recover it. 00:34:49.279 [2024-07-14 09:44:33.471368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.279 [2024-07-14 09:44:33.471394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.279 qpair failed and we were unable to recover it. 00:34:49.279 [2024-07-14 09:44:33.471612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.279 [2024-07-14 09:44:33.471642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.279 qpair failed and we were unable to recover it. 00:34:49.279 [2024-07-14 09:44:33.471833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.279 [2024-07-14 09:44:33.471859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.279 qpair failed and we were unable to recover it. 00:34:49.279 [2024-07-14 09:44:33.472091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.279 [2024-07-14 09:44:33.472117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.279 qpair failed and we were unable to recover it. 00:34:49.279 [2024-07-14 09:44:33.472311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.279 [2024-07-14 09:44:33.472337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.279 qpair failed and we were unable to recover it. 00:34:49.279 [2024-07-14 09:44:33.472554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.279 [2024-07-14 09:44:33.472580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.279 qpair failed and we were unable to recover it. 00:34:49.279 [2024-07-14 09:44:33.472771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.279 [2024-07-14 09:44:33.472797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.279 qpair failed and we were unable to recover it. 
00:34:49.279 [2024-07-14 09:44:33.472958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.279 [2024-07-14 09:44:33.472985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.279 qpair failed and we were unable to recover it. 00:34:49.279 [2024-07-14 09:44:33.473152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.279 [2024-07-14 09:44:33.473179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.279 qpair failed and we were unable to recover it. 00:34:49.279 [2024-07-14 09:44:33.473404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.279 [2024-07-14 09:44:33.473430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.279 qpair failed and we were unable to recover it. 00:34:49.279 [2024-07-14 09:44:33.473649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.279 [2024-07-14 09:44:33.473676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.279 qpair failed and we were unable to recover it. 00:34:49.279 [2024-07-14 09:44:33.473873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.279 [2024-07-14 09:44:33.473899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.279 qpair failed and we were unable to recover it. 00:34:49.279 [2024-07-14 09:44:33.474089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.279 [2024-07-14 09:44:33.474116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.279 qpair failed and we were unable to recover it. 00:34:49.279 [2024-07-14 09:44:33.474308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.279 [2024-07-14 09:44:33.474334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.279 qpair failed and we were unable to recover it. 00:34:49.279 [2024-07-14 09:44:33.474531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.279 [2024-07-14 09:44:33.474558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.279 qpair failed and we were unable to recover it. 00:34:49.279 [2024-07-14 09:44:33.474722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.279 [2024-07-14 09:44:33.474748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.279 qpair failed and we were unable to recover it. 00:34:49.279 [2024-07-14 09:44:33.474943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.279 [2024-07-14 09:44:33.474970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.279 qpair failed and we were unable to recover it. 
00:34:49.279 [2024-07-14 09:44:33.475188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.279 [2024-07-14 09:44:33.475215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.279 qpair failed and we were unable to recover it. 00:34:49.279 [2024-07-14 09:44:33.475410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.279 [2024-07-14 09:44:33.475436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.279 qpair failed and we were unable to recover it. 00:34:49.279 [2024-07-14 09:44:33.475599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.279 [2024-07-14 09:44:33.475626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.279 qpair failed and we were unable to recover it. 00:34:49.279 [2024-07-14 09:44:33.475806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.279 [2024-07-14 09:44:33.475832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.279 qpair failed and we were unable to recover it. 00:34:49.279 [2024-07-14 09:44:33.476046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.279 [2024-07-14 09:44:33.476074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.279 qpair failed and we were unable to recover it. 00:34:49.279 [2024-07-14 09:44:33.476241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.279 [2024-07-14 09:44:33.476267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.279 qpair failed and we were unable to recover it. 00:34:49.279 [2024-07-14 09:44:33.476455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.279 [2024-07-14 09:44:33.476481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.279 qpair failed and we were unable to recover it. 00:34:49.279 [2024-07-14 09:44:33.476704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.279 [2024-07-14 09:44:33.476730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.279 qpair failed and we were unable to recover it. 00:34:49.279 [2024-07-14 09:44:33.476940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.279 [2024-07-14 09:44:33.476967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.279 qpair failed and we were unable to recover it. 00:34:49.280 [2024-07-14 09:44:33.477159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.280 [2024-07-14 09:44:33.477185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.280 qpair failed and we were unable to recover it. 
00:34:49.280 [2024-07-14 09:44:33.477354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.280 [2024-07-14 09:44:33.477380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.280 qpair failed and we were unable to recover it. 00:34:49.280 [2024-07-14 09:44:33.477655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.280 [2024-07-14 09:44:33.477682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.280 qpair failed and we were unable to recover it. 00:34:49.280 [2024-07-14 09:44:33.477898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.280 [2024-07-14 09:44:33.477925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.280 qpair failed and we were unable to recover it. 00:34:49.280 [2024-07-14 09:44:33.478084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.280 [2024-07-14 09:44:33.478111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.280 qpair failed and we were unable to recover it. 00:34:49.280 [2024-07-14 09:44:33.478329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.280 [2024-07-14 09:44:33.478356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.280 qpair failed and we were unable to recover it. 00:34:49.280 [2024-07-14 09:44:33.478545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.280 [2024-07-14 09:44:33.478571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.280 qpair failed and we were unable to recover it. 00:34:49.280 [2024-07-14 09:44:33.478764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.280 [2024-07-14 09:44:33.478790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.280 qpair failed and we were unable to recover it. 00:34:49.280 [2024-07-14 09:44:33.478981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.280 [2024-07-14 09:44:33.479009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.280 qpair failed and we were unable to recover it. 00:34:49.280 [2024-07-14 09:44:33.479194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.280 [2024-07-14 09:44:33.479221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.280 qpair failed and we were unable to recover it. 00:34:49.280 [2024-07-14 09:44:33.479410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.280 [2024-07-14 09:44:33.479437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.280 qpair failed and we were unable to recover it. 
00:34:49.280 [2024-07-14 09:44:33.479628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.280 [2024-07-14 09:44:33.479654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.280 qpair failed and we were unable to recover it. 00:34:49.280 [2024-07-14 09:44:33.479844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.280 [2024-07-14 09:44:33.479876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.280 qpair failed and we were unable to recover it. 00:34:49.280 [2024-07-14 09:44:33.480060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.280 [2024-07-14 09:44:33.480086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.280 qpair failed and we were unable to recover it. 00:34:49.280 [2024-07-14 09:44:33.480275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.280 [2024-07-14 09:44:33.480300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.280 qpair failed and we were unable to recover it. 00:34:49.280 [2024-07-14 09:44:33.480487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.280 [2024-07-14 09:44:33.480514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.280 qpair failed and we were unable to recover it. 00:34:49.280 [2024-07-14 09:44:33.480709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.280 [2024-07-14 09:44:33.480737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.280 qpair failed and we were unable to recover it. 00:34:49.280 [2024-07-14 09:44:33.480932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.280 [2024-07-14 09:44:33.480959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.280 qpair failed and we were unable to recover it. 00:34:49.280 [2024-07-14 09:44:33.481148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.280 [2024-07-14 09:44:33.481175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.280 qpair failed and we were unable to recover it. 00:34:49.280 [2024-07-14 09:44:33.481358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.280 [2024-07-14 09:44:33.481385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.280 qpair failed and we were unable to recover it. 00:34:49.280 [2024-07-14 09:44:33.481574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.280 [2024-07-14 09:44:33.481600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.280 qpair failed and we were unable to recover it. 
00:34:49.280 [2024-07-14 09:44:33.481791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.280 [2024-07-14 09:44:33.481819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.280 qpair failed and we were unable to recover it. 00:34:49.280 [2024-07-14 09:44:33.482024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.280 [2024-07-14 09:44:33.482051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.280 qpair failed and we were unable to recover it. 00:34:49.280 [2024-07-14 09:44:33.482243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.280 [2024-07-14 09:44:33.482270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.280 qpair failed and we were unable to recover it. 00:34:49.280 [2024-07-14 09:44:33.482461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.280 [2024-07-14 09:44:33.482487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.280 qpair failed and we were unable to recover it. 00:34:49.280 [2024-07-14 09:44:33.482643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.280 [2024-07-14 09:44:33.482668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.280 qpair failed and we were unable to recover it. 00:34:49.280 [2024-07-14 09:44:33.482859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.280 [2024-07-14 09:44:33.482899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.280 qpair failed and we were unable to recover it. 00:34:49.280 [2024-07-14 09:44:33.483069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.280 [2024-07-14 09:44:33.483095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.280 qpair failed and we were unable to recover it. 00:34:49.280 [2024-07-14 09:44:33.483285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.280 [2024-07-14 09:44:33.483310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.280 qpair failed and we were unable to recover it. 00:34:49.280 [2024-07-14 09:44:33.483503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.280 [2024-07-14 09:44:33.483529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.280 qpair failed and we were unable to recover it. 00:34:49.280 [2024-07-14 09:44:33.483718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.280 [2024-07-14 09:44:33.483745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.280 qpair failed and we were unable to recover it. 
00:34:49.280 [2024-07-14 09:44:33.483928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.280 [2024-07-14 09:44:33.483955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.280 qpair failed and we were unable to recover it. 00:34:49.280 [2024-07-14 09:44:33.484145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.280 [2024-07-14 09:44:33.484171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.280 qpair failed and we were unable to recover it. 00:34:49.280 [2024-07-14 09:44:33.484273] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:49.280 [2024-07-14 09:44:33.484361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.280 [2024-07-14 09:44:33.484387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.280 qpair failed and we were unable to recover it. 00:34:49.280 [2024-07-14 09:44:33.484589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.280 [2024-07-14 09:44:33.484615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.280 qpair failed and we were unable to recover it. 00:34:49.280 [2024-07-14 09:44:33.484831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.280 [2024-07-14 09:44:33.484857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.280 qpair failed and we were unable to recover it. 00:34:49.280 [2024-07-14 09:44:33.485026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.280 [2024-07-14 09:44:33.485052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.280 qpair failed and we were unable to recover it. 00:34:49.280 [2024-07-14 09:44:33.485240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.280 [2024-07-14 09:44:33.485267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.280 qpair failed and we were unable to recover it. 00:34:49.280 [2024-07-14 09:44:33.485426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.280 [2024-07-14 09:44:33.485454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.280 qpair failed and we were unable to recover it. 00:34:49.280 [2024-07-14 09:44:33.485648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.280 [2024-07-14 09:44:33.485675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.280 qpair failed and we were unable to recover it. 
00:34:49.280 [2024-07-14 09:44:33.485896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.281 [2024-07-14 09:44:33.485924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.281 qpair failed and we were unable to recover it. 00:34:49.281 [2024-07-14 09:44:33.486112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.281 [2024-07-14 09:44:33.486138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.281 qpair failed and we were unable to recover it. 00:34:49.281 [2024-07-14 09:44:33.486333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.281 [2024-07-14 09:44:33.486359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.281 qpair failed and we were unable to recover it. 00:34:49.281 [2024-07-14 09:44:33.486518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.281 [2024-07-14 09:44:33.486546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.281 qpair failed and we were unable to recover it. 00:34:49.281 [2024-07-14 09:44:33.486743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.281 [2024-07-14 09:44:33.486770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.281 qpair failed and we were unable to recover it. 00:34:49.281 [2024-07-14 09:44:33.486993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.281 [2024-07-14 09:44:33.487021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.281 qpair failed and we were unable to recover it. 00:34:49.281 [2024-07-14 09:44:33.487210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.281 [2024-07-14 09:44:33.487237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.281 qpair failed and we were unable to recover it. 00:34:49.281 [2024-07-14 09:44:33.487437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.281 [2024-07-14 09:44:33.487463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.281 qpair failed and we were unable to recover it. 00:34:49.281 [2024-07-14 09:44:33.487655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.281 [2024-07-14 09:44:33.487681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.281 qpair failed and we were unable to recover it. 00:34:49.281 [2024-07-14 09:44:33.487899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.281 [2024-07-14 09:44:33.487927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.281 qpair failed and we were unable to recover it. 
00:34:49.281 [2024-07-14 09:44:33.488143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.281 [2024-07-14 09:44:33.488170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.281 qpair failed and we were unable to recover it. 00:34:49.281 [2024-07-14 09:44:33.488385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.281 [2024-07-14 09:44:33.488412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.281 qpair failed and we were unable to recover it. 00:34:49.281 [2024-07-14 09:44:33.488603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.281 [2024-07-14 09:44:33.488629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.281 qpair failed and we were unable to recover it. 00:34:49.281 [2024-07-14 09:44:33.488849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.281 [2024-07-14 09:44:33.488882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.281 qpair failed and we were unable to recover it. 00:34:49.281 [2024-07-14 09:44:33.489104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.281 [2024-07-14 09:44:33.489131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.281 qpair failed and we were unable to recover it. 00:34:49.281 [2024-07-14 09:44:33.489349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.281 [2024-07-14 09:44:33.489380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.281 qpair failed and we were unable to recover it. 00:34:49.281 [2024-07-14 09:44:33.489574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.281 [2024-07-14 09:44:33.489601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.281 qpair failed and we were unable to recover it. 00:34:49.281 [2024-07-14 09:44:33.489794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.281 [2024-07-14 09:44:33.489820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.281 qpair failed and we were unable to recover it. 00:34:49.281 [2024-07-14 09:44:33.490021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.281 [2024-07-14 09:44:33.490048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.281 qpair failed and we were unable to recover it. 00:34:49.281 [2024-07-14 09:44:33.490241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.281 [2024-07-14 09:44:33.490269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.281 qpair failed and we were unable to recover it. 
00:34:49.281 [2024-07-14 09:44:33.490461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.281 [2024-07-14 09:44:33.490488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.281 qpair failed and we were unable to recover it. 00:34:49.281 [2024-07-14 09:44:33.490702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.281 [2024-07-14 09:44:33.490729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.281 qpair failed and we were unable to recover it. 00:34:49.281 [2024-07-14 09:44:33.490951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.281 [2024-07-14 09:44:33.490979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.281 qpair failed and we were unable to recover it. 00:34:49.281 [2024-07-14 09:44:33.491142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.281 [2024-07-14 09:44:33.491169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.281 qpair failed and we were unable to recover it. 00:34:49.281 [2024-07-14 09:44:33.491383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.281 [2024-07-14 09:44:33.491409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.281 qpair failed and we were unable to recover it. 00:34:49.281 [2024-07-14 09:44:33.491602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.281 [2024-07-14 09:44:33.491629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.281 qpair failed and we were unable to recover it. 00:34:49.281 [2024-07-14 09:44:33.491793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.281 [2024-07-14 09:44:33.491819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.281 qpair failed and we were unable to recover it. 00:34:49.281 [2024-07-14 09:44:33.492162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.281 [2024-07-14 09:44:33.492189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.281 qpair failed and we were unable to recover it. 00:34:49.281 [2024-07-14 09:44:33.492384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.281 [2024-07-14 09:44:33.492410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.281 qpair failed and we were unable to recover it. 00:34:49.281 [2024-07-14 09:44:33.492613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.281 [2024-07-14 09:44:33.492639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.281 qpair failed and we were unable to recover it. 
00:34:49.281 [2024-07-14 09:44:33.492833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.281 [2024-07-14 09:44:33.492860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.281 qpair failed and we were unable to recover it. 00:34:49.281 [2024-07-14 09:44:33.493045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.281 [2024-07-14 09:44:33.493073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.281 qpair failed and we were unable to recover it. 00:34:49.281 [2024-07-14 09:44:33.493228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.281 [2024-07-14 09:44:33.493256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.281 qpair failed and we were unable to recover it. 00:34:49.281 [2024-07-14 09:44:33.493477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.281 [2024-07-14 09:44:33.493505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.281 qpair failed and we were unable to recover it. 00:34:49.281 [2024-07-14 09:44:33.493723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.281 [2024-07-14 09:44:33.493750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.281 qpair failed and we were unable to recover it. 00:34:49.281 [2024-07-14 09:44:33.493915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.281 [2024-07-14 09:44:33.493942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.281 qpair failed and we were unable to recover it. 00:34:49.281 [2024-07-14 09:44:33.494115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.281 [2024-07-14 09:44:33.494142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.281 qpair failed and we were unable to recover it. 00:34:49.281 [2024-07-14 09:44:33.494306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.281 [2024-07-14 09:44:33.494333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.281 qpair failed and we were unable to recover it. 00:34:49.281 [2024-07-14 09:44:33.494549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.281 [2024-07-14 09:44:33.494575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.281 qpair failed and we were unable to recover it. 00:34:49.281 [2024-07-14 09:44:33.494773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.281 [2024-07-14 09:44:33.494799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.281 qpair failed and we were unable to recover it. 
00:34:49.281 [2024-07-14 09:44:33.494973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.281 [2024-07-14 09:44:33.495000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.281 qpair failed and we were unable to recover it. 00:34:49.281 [2024-07-14 09:44:33.495251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.282 [2024-07-14 09:44:33.495278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.282 qpair failed and we were unable to recover it. 00:34:49.282 [2024-07-14 09:44:33.495503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.282 [2024-07-14 09:44:33.495530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.282 qpair failed and we were unable to recover it. 00:34:49.282 [2024-07-14 09:44:33.495713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.282 [2024-07-14 09:44:33.495739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.282 qpair failed and we were unable to recover it. 00:34:49.282 [2024-07-14 09:44:33.495905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.282 [2024-07-14 09:44:33.495932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.282 qpair failed and we were unable to recover it. 00:34:49.282 [2024-07-14 09:44:33.496129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.282 [2024-07-14 09:44:33.496158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.282 qpair failed and we were unable to recover it. 00:34:49.282 [2024-07-14 09:44:33.496328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.282 [2024-07-14 09:44:33.496356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.282 qpair failed and we were unable to recover it. 00:34:49.282 [2024-07-14 09:44:33.496656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.282 [2024-07-14 09:44:33.496682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.282 qpair failed and we were unable to recover it. 00:34:49.282 [2024-07-14 09:44:33.496883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.282 [2024-07-14 09:44:33.496911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.282 qpair failed and we were unable to recover it. 00:34:49.282 [2024-07-14 09:44:33.497131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.282 [2024-07-14 09:44:33.497157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.282 qpair failed and we were unable to recover it. 
00:34:49.282 [2024-07-14 09:44:33.497385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.282 [2024-07-14 09:44:33.497412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.282 qpair failed and we were unable to recover it. 00:34:49.282 [2024-07-14 09:44:33.497634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.282 [2024-07-14 09:44:33.497660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.282 qpair failed and we were unable to recover it. 00:34:49.282 [2024-07-14 09:44:33.497829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.282 [2024-07-14 09:44:33.497856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.282 qpair failed and we were unable to recover it. 00:34:49.282 [2024-07-14 09:44:33.498061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.282 [2024-07-14 09:44:33.498088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.282 qpair failed and we were unable to recover it. 00:34:49.282 [2024-07-14 09:44:33.498291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.282 [2024-07-14 09:44:33.498317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.282 qpair failed and we were unable to recover it. 00:34:49.282 [2024-07-14 09:44:33.498515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.282 [2024-07-14 09:44:33.498546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.282 qpair failed and we were unable to recover it. 00:34:49.282 [2024-07-14 09:44:33.498735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.282 [2024-07-14 09:44:33.498763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.282 qpair failed and we were unable to recover it. 00:34:49.282 [2024-07-14 09:44:33.498930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.282 [2024-07-14 09:44:33.498958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.282 qpair failed and we were unable to recover it. 00:34:49.282 [2024-07-14 09:44:33.499145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.282 [2024-07-14 09:44:33.499173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.282 qpair failed and we were unable to recover it. 00:34:49.282 [2024-07-14 09:44:33.499340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.282 [2024-07-14 09:44:33.499368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.282 qpair failed and we were unable to recover it. 
00:34:49.282 [2024-07-14 09:44:33.499530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.282 [2024-07-14 09:44:33.499556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.282 qpair failed and we were unable to recover it. 00:34:49.282 [2024-07-14 09:44:33.499717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.282 [2024-07-14 09:44:33.499743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.282 qpair failed and we were unable to recover it. 00:34:49.282 [2024-07-14 09:44:33.499961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.282 [2024-07-14 09:44:33.499988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.282 qpair failed and we were unable to recover it. 00:34:49.282 [2024-07-14 09:44:33.500178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.282 [2024-07-14 09:44:33.500204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.282 qpair failed and we were unable to recover it. 00:34:49.282 [2024-07-14 09:44:33.500421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.282 [2024-07-14 09:44:33.500447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.282 qpair failed and we were unable to recover it. 00:34:49.282 [2024-07-14 09:44:33.500631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.282 [2024-07-14 09:44:33.500657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.282 qpair failed and we were unable to recover it. 00:34:49.282 [2024-07-14 09:44:33.500846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.282 [2024-07-14 09:44:33.500880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.282 qpair failed and we were unable to recover it. 00:34:49.282 [2024-07-14 09:44:33.501077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.282 [2024-07-14 09:44:33.501104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.282 qpair failed and we were unable to recover it. 00:34:49.282 [2024-07-14 09:44:33.501267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.282 [2024-07-14 09:44:33.501293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.282 qpair failed and we were unable to recover it. 00:34:49.282 [2024-07-14 09:44:33.501492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.282 [2024-07-14 09:44:33.501519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.282 qpair failed and we were unable to recover it. 
00:34:49.282 [2024-07-14 09:44:33.501712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.282 [2024-07-14 09:44:33.501739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.282 qpair failed and we were unable to recover it. 00:34:49.282 [2024-07-14 09:44:33.501936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.282 [2024-07-14 09:44:33.501964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.282 qpair failed and we were unable to recover it. 00:34:49.282 [2024-07-14 09:44:33.502137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.282 [2024-07-14 09:44:33.502179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.282 qpair failed and we were unable to recover it. 00:34:49.282 [2024-07-14 09:44:33.502404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.282 [2024-07-14 09:44:33.502430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.282 qpair failed and we were unable to recover it. 00:34:49.282 [2024-07-14 09:44:33.502614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.282 [2024-07-14 09:44:33.502640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.282 qpair failed and we were unable to recover it. 00:34:49.282 [2024-07-14 09:44:33.502816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.282 [2024-07-14 09:44:33.502842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.282 qpair failed and we were unable to recover it. 00:34:49.282 [2024-07-14 09:44:33.503082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.282 [2024-07-14 09:44:33.503110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.282 qpair failed and we were unable to recover it. 00:34:49.282 [2024-07-14 09:44:33.503303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.283 [2024-07-14 09:44:33.503330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.283 qpair failed and we were unable to recover it. 00:34:49.283 [2024-07-14 09:44:33.503532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.283 [2024-07-14 09:44:33.503559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.283 qpair failed and we were unable to recover it. 00:34:49.283 [2024-07-14 09:44:33.503781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.283 [2024-07-14 09:44:33.503809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.283 qpair failed and we were unable to recover it. 
00:34:49.283 [2024-07-14 09:44:33.504003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.283 [2024-07-14 09:44:33.504031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.283 qpair failed and we were unable to recover it. 00:34:49.283 [2024-07-14 09:44:33.504199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.283 [2024-07-14 09:44:33.504225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.283 qpair failed and we were unable to recover it. 00:34:49.283 [2024-07-14 09:44:33.504398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.283 [2024-07-14 09:44:33.504425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.283 qpair failed and we were unable to recover it. 00:34:49.283 [2024-07-14 09:44:33.504641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.283 [2024-07-14 09:44:33.504667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.283 qpair failed and we were unable to recover it. 00:34:49.283 [2024-07-14 09:44:33.504839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.283 [2024-07-14 09:44:33.504873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.283 qpair failed and we were unable to recover it. 00:34:49.283 [2024-07-14 09:44:33.505060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.283 [2024-07-14 09:44:33.505088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.283 qpair failed and we were unable to recover it. 00:34:49.283 [2024-07-14 09:44:33.505250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.283 [2024-07-14 09:44:33.505278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.283 qpair failed and we were unable to recover it. 00:34:49.283 [2024-07-14 09:44:33.505493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.283 [2024-07-14 09:44:33.505520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.283 qpair failed and we were unable to recover it. 00:34:49.283 [2024-07-14 09:44:33.505708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.283 [2024-07-14 09:44:33.505735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.283 qpair failed and we were unable to recover it. 00:34:49.283 [2024-07-14 09:44:33.505903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.283 [2024-07-14 09:44:33.505929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.283 qpair failed and we were unable to recover it. 
00:34:49.283 [2024-07-14 09:44:33.506119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.283 [2024-07-14 09:44:33.506147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.283 qpair failed and we were unable to recover it. 00:34:49.283 [2024-07-14 09:44:33.506346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.283 [2024-07-14 09:44:33.506372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.283 qpair failed and we were unable to recover it. 00:34:49.283 [2024-07-14 09:44:33.506563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.283 [2024-07-14 09:44:33.506589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.283 qpair failed and we were unable to recover it. 00:34:49.283 [2024-07-14 09:44:33.506776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.283 [2024-07-14 09:44:33.506802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.283 qpair failed and we were unable to recover it. 00:34:49.283 [2024-07-14 09:44:33.506979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.283 [2024-07-14 09:44:33.507007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.283 qpair failed and we were unable to recover it. 00:34:49.283 [2024-07-14 09:44:33.507226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.283 [2024-07-14 09:44:33.507258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.283 qpair failed and we were unable to recover it. 00:34:49.283 [2024-07-14 09:44:33.507421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.283 [2024-07-14 09:44:33.507447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.283 qpair failed and we were unable to recover it. 00:34:49.283 [2024-07-14 09:44:33.507616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.283 [2024-07-14 09:44:33.507658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.283 qpair failed and we were unable to recover it. 00:34:49.283 [2024-07-14 09:44:33.507888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.283 [2024-07-14 09:44:33.507915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.283 qpair failed and we were unable to recover it. 00:34:49.283 [2024-07-14 09:44:33.508111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.283 [2024-07-14 09:44:33.508138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.283 qpair failed and we were unable to recover it. 
00:34:49.283 [2024-07-14 09:44:33.508327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.283 [2024-07-14 09:44:33.508353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.283 qpair failed and we were unable to recover it. 00:34:49.283 [2024-07-14 09:44:33.508545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.283 [2024-07-14 09:44:33.508572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.283 qpair failed and we were unable to recover it. 00:34:49.283 [2024-07-14 09:44:33.508757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.283 [2024-07-14 09:44:33.508784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.283 qpair failed and we were unable to recover it. 00:34:49.283 [2024-07-14 09:44:33.508977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.283 [2024-07-14 09:44:33.509004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.283 qpair failed and we were unable to recover it. 00:34:49.283 [2024-07-14 09:44:33.509220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.283 [2024-07-14 09:44:33.509247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.283 qpair failed and we were unable to recover it. 00:34:49.283 [2024-07-14 09:44:33.509441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.283 [2024-07-14 09:44:33.509469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.283 qpair failed and we were unable to recover it. 00:34:49.283 [2024-07-14 09:44:33.509639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.283 [2024-07-14 09:44:33.509664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.283 qpair failed and we were unable to recover it. 00:34:49.283 [2024-07-14 09:44:33.509890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.283 [2024-07-14 09:44:33.509917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.283 qpair failed and we were unable to recover it. 00:34:49.283 [2024-07-14 09:44:33.510077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.283 [2024-07-14 09:44:33.510105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.283 qpair failed and we were unable to recover it. 00:34:49.283 [2024-07-14 09:44:33.510461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.283 [2024-07-14 09:44:33.510487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.283 qpair failed and we were unable to recover it. 
00:34:49.283 [2024-07-14 09:44:33.510659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.283 [2024-07-14 09:44:33.510686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.283 qpair failed and we were unable to recover it. 00:34:49.283 [2024-07-14 09:44:33.510897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.283 [2024-07-14 09:44:33.510925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.283 qpair failed and we were unable to recover it. 00:34:49.283 [2024-07-14 09:44:33.511116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.283 [2024-07-14 09:44:33.511142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.283 qpair failed and we were unable to recover it. 00:34:49.283 [2024-07-14 09:44:33.511330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.283 [2024-07-14 09:44:33.511357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.283 qpair failed and we were unable to recover it. 00:34:49.283 [2024-07-14 09:44:33.511521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.283 [2024-07-14 09:44:33.511547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.283 qpair failed and we were unable to recover it. 00:34:49.283 [2024-07-14 09:44:33.511715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.283 [2024-07-14 09:44:33.511741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.283 qpair failed and we were unable to recover it. 00:34:49.283 [2024-07-14 09:44:33.511970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.283 [2024-07-14 09:44:33.511997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.283 qpair failed and we were unable to recover it. 00:34:49.283 [2024-07-14 09:44:33.512196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.283 [2024-07-14 09:44:33.512221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.283 qpair failed and we were unable to recover it. 00:34:49.283 [2024-07-14 09:44:33.512418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.284 [2024-07-14 09:44:33.512445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.284 qpair failed and we were unable to recover it. 00:34:49.284 [2024-07-14 09:44:33.512608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.284 [2024-07-14 09:44:33.512635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.284 qpair failed and we were unable to recover it. 
00:34:49.284 [2024-07-14 09:44:33.512827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.284 [2024-07-14 09:44:33.512854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.284 qpair failed and we were unable to recover it. 00:34:49.284 [2024-07-14 09:44:33.513044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.284 [2024-07-14 09:44:33.513071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.284 qpair failed and we were unable to recover it. 00:34:49.284 [2024-07-14 09:44:33.513277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.284 [2024-07-14 09:44:33.513315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.284 qpair failed and we were unable to recover it. 00:34:49.284 [2024-07-14 09:44:33.513475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.284 [2024-07-14 09:44:33.513501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.284 qpair failed and we were unable to recover it. 00:34:49.284 [2024-07-14 09:44:33.513772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.284 [2024-07-14 09:44:33.513798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.284 qpair failed and we were unable to recover it. 00:34:49.284 [2024-07-14 09:44:33.514013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.284 [2024-07-14 09:44:33.514040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.284 qpair failed and we were unable to recover it. 00:34:49.284 [2024-07-14 09:44:33.514228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.284 [2024-07-14 09:44:33.514255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.284 qpair failed and we were unable to recover it. 00:34:49.284 [2024-07-14 09:44:33.514441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.284 [2024-07-14 09:44:33.514467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.284 qpair failed and we were unable to recover it. 00:34:49.284 [2024-07-14 09:44:33.514653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.284 [2024-07-14 09:44:33.514680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.284 qpair failed and we were unable to recover it. 00:34:49.284 [2024-07-14 09:44:33.514880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.284 [2024-07-14 09:44:33.514910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.284 qpair failed and we were unable to recover it. 
00:34:49.284 [2024-07-14 09:44:33.515080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.284 [2024-07-14 09:44:33.515106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.284 qpair failed and we were unable to recover it. 00:34:49.284 [2024-07-14 09:44:33.515296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.284 [2024-07-14 09:44:33.515323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.284 qpair failed and we were unable to recover it. 00:34:49.284 [2024-07-14 09:44:33.515522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.284 [2024-07-14 09:44:33.515554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.284 qpair failed and we were unable to recover it. 00:34:49.284 [2024-07-14 09:44:33.515771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.284 [2024-07-14 09:44:33.515798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.284 qpair failed and we were unable to recover it. 00:34:49.284 [2024-07-14 09:44:33.516014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.284 [2024-07-14 09:44:33.516042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.284 qpair failed and we were unable to recover it. 00:34:49.284 [2024-07-14 09:44:33.516233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.284 [2024-07-14 09:44:33.516264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.284 qpair failed and we were unable to recover it. 00:34:49.284 [2024-07-14 09:44:33.516484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.284 [2024-07-14 09:44:33.516511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.284 qpair failed and we were unable to recover it. 00:34:49.284 [2024-07-14 09:44:33.516679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.284 [2024-07-14 09:44:33.516705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.284 qpair failed and we were unable to recover it. 00:34:49.284 [2024-07-14 09:44:33.516896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.284 [2024-07-14 09:44:33.516923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.284 qpair failed and we were unable to recover it. 00:34:49.284 [2024-07-14 09:44:33.517151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.284 [2024-07-14 09:44:33.517178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.284 qpair failed and we were unable to recover it. 
00:34:49.284 [2024-07-14 09:44:33.517370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.284 [2024-07-14 09:44:33.517396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.284 qpair failed and we were unable to recover it. 00:34:49.284 [2024-07-14 09:44:33.517596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.284 [2024-07-14 09:44:33.517622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.284 qpair failed and we were unable to recover it. 00:34:49.284 [2024-07-14 09:44:33.517780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.284 [2024-07-14 09:44:33.517806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.284 qpair failed and we were unable to recover it. 00:34:49.284 [2024-07-14 09:44:33.517999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.284 [2024-07-14 09:44:33.518026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.284 qpair failed and we were unable to recover it. 00:34:49.284 [2024-07-14 09:44:33.518216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.284 [2024-07-14 09:44:33.518243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.284 qpair failed and we were unable to recover it. 00:34:49.284 [2024-07-14 09:44:33.518402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.284 [2024-07-14 09:44:33.518428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.284 qpair failed and we were unable to recover it. 00:34:49.284 [2024-07-14 09:44:33.518613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.284 [2024-07-14 09:44:33.518640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.284 qpair failed and we were unable to recover it. 00:34:49.284 [2024-07-14 09:44:33.518856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.284 [2024-07-14 09:44:33.518896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.284 qpair failed and we were unable to recover it. 00:34:49.284 [2024-07-14 09:44:33.519109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.284 [2024-07-14 09:44:33.519135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.284 qpair failed and we were unable to recover it. 00:34:49.284 [2024-07-14 09:44:33.519339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.284 [2024-07-14 09:44:33.519365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.284 qpair failed and we were unable to recover it. 
00:34:49.284 [2024-07-14 09:44:33.519562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.284 [2024-07-14 09:44:33.519589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.284 qpair failed and we were unable to recover it. 00:34:49.284 [2024-07-14 09:44:33.519778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.284 [2024-07-14 09:44:33.519805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.284 qpair failed and we were unable to recover it. 00:34:49.284 [2024-07-14 09:44:33.519973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.284 [2024-07-14 09:44:33.520000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.284 qpair failed and we were unable to recover it. 00:34:49.284 [2024-07-14 09:44:33.520197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.284 [2024-07-14 09:44:33.520225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.284 qpair failed and we were unable to recover it. 00:34:49.284 [2024-07-14 09:44:33.520391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.284 [2024-07-14 09:44:33.520417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.284 qpair failed and we were unable to recover it. 00:34:49.284 [2024-07-14 09:44:33.520618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.284 [2024-07-14 09:44:33.520644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.284 qpair failed and we were unable to recover it. 00:34:49.284 [2024-07-14 09:44:33.520830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.284 [2024-07-14 09:44:33.520856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.284 qpair failed and we were unable to recover it. 00:34:49.284 [2024-07-14 09:44:33.521057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.284 [2024-07-14 09:44:33.521084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.284 qpair failed and we were unable to recover it. 00:34:49.284 [2024-07-14 09:44:33.521276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.284 [2024-07-14 09:44:33.521304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.284 qpair failed and we were unable to recover it. 00:34:49.284 [2024-07-14 09:44:33.521484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.285 [2024-07-14 09:44:33.521510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.285 qpair failed and we were unable to recover it. 
00:34:49.285 [2024-07-14 09:44:33.521705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.285 [2024-07-14 09:44:33.521731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.285 qpair failed and we were unable to recover it. 00:34:49.285 [2024-07-14 09:44:33.521893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.285 [2024-07-14 09:44:33.521935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.285 qpair failed and we were unable to recover it. 00:34:49.285 [2024-07-14 09:44:33.522149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.285 [2024-07-14 09:44:33.522176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.285 qpair failed and we were unable to recover it. 00:34:49.285 [2024-07-14 09:44:33.522371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.285 [2024-07-14 09:44:33.522399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.285 qpair failed and we were unable to recover it. 00:34:49.285 [2024-07-14 09:44:33.522615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.285 [2024-07-14 09:44:33.522642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.285 qpair failed and we were unable to recover it. 00:34:49.285 [2024-07-14 09:44:33.522796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.285 [2024-07-14 09:44:33.522822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.285 qpair failed and we were unable to recover it. 00:34:49.285 [2024-07-14 09:44:33.522992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.285 [2024-07-14 09:44:33.523021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.285 qpair failed and we were unable to recover it. 00:34:49.285 [2024-07-14 09:44:33.523241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.285 [2024-07-14 09:44:33.523268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.285 qpair failed and we were unable to recover it. 00:34:49.285 [2024-07-14 09:44:33.523489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.285 [2024-07-14 09:44:33.523516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.285 qpair failed and we were unable to recover it. 00:34:49.285 [2024-07-14 09:44:33.523706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.285 [2024-07-14 09:44:33.523733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.285 qpair failed and we were unable to recover it. 
00:34:49.285 [2024-07-14 09:44:33.523974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.285 [2024-07-14 09:44:33.524001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.285 qpair failed and we were unable to recover it. 00:34:49.285 [2024-07-14 09:44:33.524208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.285 [2024-07-14 09:44:33.524234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.285 qpair failed and we were unable to recover it. 00:34:49.285 [2024-07-14 09:44:33.524430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.285 [2024-07-14 09:44:33.524458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.285 qpair failed and we were unable to recover it. 00:34:49.285 [2024-07-14 09:44:33.524650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.285 [2024-07-14 09:44:33.524676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.285 qpair failed and we were unable to recover it. 00:34:49.285 [2024-07-14 09:44:33.524860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.285 [2024-07-14 09:44:33.524894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.285 qpair failed and we were unable to recover it. 00:34:49.285 [2024-07-14 09:44:33.525091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.285 [2024-07-14 09:44:33.525122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.285 qpair failed and we were unable to recover it. 00:34:49.285 [2024-07-14 09:44:33.525314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.285 [2024-07-14 09:44:33.525340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.285 qpair failed and we were unable to recover it. 00:34:49.285 [2024-07-14 09:44:33.525524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.285 [2024-07-14 09:44:33.525551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.285 qpair failed and we were unable to recover it. 00:34:49.285 [2024-07-14 09:44:33.525738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.285 [2024-07-14 09:44:33.525764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.285 qpair failed and we were unable to recover it. 00:34:49.285 [2024-07-14 09:44:33.525926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.285 [2024-07-14 09:44:33.525953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.285 qpair failed and we were unable to recover it. 
00:34:49.285 [2024-07-14 09:44:33.526171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.285 [2024-07-14 09:44:33.526197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.285 qpair failed and we were unable to recover it. 00:34:49.285 [2024-07-14 09:44:33.526391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.285 [2024-07-14 09:44:33.526417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.285 qpair failed and we were unable to recover it. 00:34:49.285 [2024-07-14 09:44:33.526577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.285 [2024-07-14 09:44:33.526605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.285 qpair failed and we were unable to recover it. 00:34:49.285 [2024-07-14 09:44:33.526791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.285 [2024-07-14 09:44:33.526818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.285 qpair failed and we were unable to recover it. 00:34:49.285 [2024-07-14 09:44:33.527038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.285 [2024-07-14 09:44:33.527066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.285 qpair failed and we were unable to recover it. 00:34:49.285 [2024-07-14 09:44:33.527229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.285 [2024-07-14 09:44:33.527255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.285 qpair failed and we were unable to recover it. 00:34:49.285 [2024-07-14 09:44:33.527415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.285 [2024-07-14 09:44:33.527441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.285 qpair failed and we were unable to recover it. 00:34:49.285 [2024-07-14 09:44:33.527635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.285 [2024-07-14 09:44:33.527662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.285 qpair failed and we were unable to recover it. 00:34:49.285 [2024-07-14 09:44:33.527857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.285 [2024-07-14 09:44:33.527897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.285 qpair failed and we were unable to recover it. 00:34:49.285 [2024-07-14 09:44:33.528105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.285 [2024-07-14 09:44:33.528131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.285 qpair failed and we were unable to recover it. 
00:34:49.285 [2024-07-14 09:44:33.528347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.285 [2024-07-14 09:44:33.528373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.285 qpair failed and we were unable to recover it. 00:34:49.285 [2024-07-14 09:44:33.528591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.285 [2024-07-14 09:44:33.528617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.285 qpair failed and we were unable to recover it. 00:34:49.285 [2024-07-14 09:44:33.528780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.285 [2024-07-14 09:44:33.528808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.285 qpair failed and we were unable to recover it. 00:34:49.285 [2024-07-14 09:44:33.529049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.285 [2024-07-14 09:44:33.529076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.285 qpair failed and we were unable to recover it. 00:34:49.285 [2024-07-14 09:44:33.529243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.285 [2024-07-14 09:44:33.529270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.285 qpair failed and we were unable to recover it. 00:34:49.285 [2024-07-14 09:44:33.529490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.285 [2024-07-14 09:44:33.529517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.285 qpair failed and we were unable to recover it. 00:34:49.285 [2024-07-14 09:44:33.529697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.285 [2024-07-14 09:44:33.529724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.285 qpair failed and we were unable to recover it. 00:34:49.285 [2024-07-14 09:44:33.529911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.285 [2024-07-14 09:44:33.529939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.285 qpair failed and we were unable to recover it. 00:34:49.285 [2024-07-14 09:44:33.530106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.285 [2024-07-14 09:44:33.530133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.285 qpair failed and we were unable to recover it. 00:34:49.285 [2024-07-14 09:44:33.530296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.285 [2024-07-14 09:44:33.530322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.285 qpair failed and we were unable to recover it. 
00:34:49.285 [2024-07-14 09:44:33.530510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.286 [2024-07-14 09:44:33.530536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.286 qpair failed and we were unable to recover it. 00:34:49.286 [2024-07-14 09:44:33.530727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.286 [2024-07-14 09:44:33.530754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.286 qpair failed and we were unable to recover it. 00:34:49.286 [2024-07-14 09:44:33.530937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.286 [2024-07-14 09:44:33.530966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.286 qpair failed and we were unable to recover it. 00:34:49.286 [2024-07-14 09:44:33.531156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.286 [2024-07-14 09:44:33.531183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.286 qpair failed and we were unable to recover it. 00:34:49.286 [2024-07-14 09:44:33.531340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.286 [2024-07-14 09:44:33.531367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.286 qpair failed and we were unable to recover it. 00:34:49.286 [2024-07-14 09:44:33.531529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.286 [2024-07-14 09:44:33.531557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.286 qpair failed and we were unable to recover it. 00:34:49.286 [2024-07-14 09:44:33.531773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.286 [2024-07-14 09:44:33.531799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.286 qpair failed and we were unable to recover it. 00:34:49.286 [2024-07-14 09:44:33.532119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.286 [2024-07-14 09:44:33.532147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.286 qpair failed and we were unable to recover it. 00:34:49.286 [2024-07-14 09:44:33.532359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.286 [2024-07-14 09:44:33.532386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.286 qpair failed and we were unable to recover it. 00:34:49.286 [2024-07-14 09:44:33.532576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.286 [2024-07-14 09:44:33.532602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.286 qpair failed and we were unable to recover it. 
00:34:49.286 [2024-07-14 09:44:33.532762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.286 [2024-07-14 09:44:33.532789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.286 qpair failed and we were unable to recover it. 00:34:49.286 [2024-07-14 09:44:33.532982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.286 [2024-07-14 09:44:33.533009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.286 qpair failed and we were unable to recover it. 00:34:49.286 [2024-07-14 09:44:33.533175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.286 [2024-07-14 09:44:33.533202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.286 qpair failed and we were unable to recover it. 00:34:49.286 [2024-07-14 09:44:33.533365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.286 [2024-07-14 09:44:33.533391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.286 qpair failed and we were unable to recover it. 00:34:49.286 [2024-07-14 09:44:33.533587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.286 [2024-07-14 09:44:33.533614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.286 qpair failed and we were unable to recover it. 00:34:49.286 [2024-07-14 09:44:33.533804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.286 [2024-07-14 09:44:33.533835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.286 qpair failed and we were unable to recover it. 00:34:49.286 [2024-07-14 09:44:33.534036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.286 [2024-07-14 09:44:33.534063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.286 qpair failed and we were unable to recover it. 00:34:49.286 [2024-07-14 09:44:33.534263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.286 [2024-07-14 09:44:33.534289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.286 qpair failed and we were unable to recover it. 00:34:49.286 [2024-07-14 09:44:33.534476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.286 [2024-07-14 09:44:33.534503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.286 qpair failed and we were unable to recover it. 00:34:49.286 [2024-07-14 09:44:33.534696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.286 [2024-07-14 09:44:33.534722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.286 qpair failed and we were unable to recover it. 
00:34:49.286 [2024-07-14 09:44:33.534912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.286 [2024-07-14 09:44:33.534940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.286 qpair failed and we were unable to recover it. 00:34:49.286 [2024-07-14 09:44:33.535133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.286 [2024-07-14 09:44:33.535160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.286 qpair failed and we were unable to recover it. 00:34:49.286 [2024-07-14 09:44:33.535353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.286 [2024-07-14 09:44:33.535379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.286 qpair failed and we were unable to recover it. 00:34:49.286 [2024-07-14 09:44:33.535572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.286 [2024-07-14 09:44:33.535599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.286 qpair failed and we were unable to recover it. 00:34:49.286 [2024-07-14 09:44:33.535788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.286 [2024-07-14 09:44:33.535815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.286 qpair failed and we were unable to recover it. 00:34:49.286 [2024-07-14 09:44:33.536008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.286 [2024-07-14 09:44:33.536035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.286 qpair failed and we were unable to recover it. 00:34:49.286 [2024-07-14 09:44:33.536225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.286 [2024-07-14 09:44:33.536252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.286 qpair failed and we were unable to recover it. 00:34:49.286 [2024-07-14 09:44:33.536455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.286 [2024-07-14 09:44:33.536480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.286 qpair failed and we were unable to recover it. 00:34:49.286 [2024-07-14 09:44:33.536678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.286 [2024-07-14 09:44:33.536705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.286 qpair failed and we were unable to recover it. 00:34:49.286 [2024-07-14 09:44:33.536925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.286 [2024-07-14 09:44:33.536953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.286 qpair failed and we were unable to recover it. 
00:34:49.286 [2024-07-14 09:44:33.537114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.286 [2024-07-14 09:44:33.537142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.286 qpair failed and we were unable to recover it. 00:34:49.286 [2024-07-14 09:44:33.537354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.286 [2024-07-14 09:44:33.537381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.286 qpair failed and we were unable to recover it. 00:34:49.286 [2024-07-14 09:44:33.537569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.286 [2024-07-14 09:44:33.537596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.286 qpair failed and we were unable to recover it. 00:34:49.286 [2024-07-14 09:44:33.537780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.286 [2024-07-14 09:44:33.537807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.286 qpair failed and we were unable to recover it. 00:34:49.286 [2024-07-14 09:44:33.538000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.286 [2024-07-14 09:44:33.538027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.286 qpair failed and we were unable to recover it. 00:34:49.286 [2024-07-14 09:44:33.538218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.286 [2024-07-14 09:44:33.538244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.286 qpair failed and we were unable to recover it. 00:34:49.287 [2024-07-14 09:44:33.538527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.287 [2024-07-14 09:44:33.538553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.287 qpair failed and we were unable to recover it. 00:34:49.287 [2024-07-14 09:44:33.538740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.287 [2024-07-14 09:44:33.538766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.287 qpair failed and we were unable to recover it. 00:34:49.287 [2024-07-14 09:44:33.538962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.287 [2024-07-14 09:44:33.538989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.287 qpair failed and we were unable to recover it. 00:34:49.287 [2024-07-14 09:44:33.539177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.287 [2024-07-14 09:44:33.539204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.287 qpair failed and we were unable to recover it. 
00:34:49.287 [2024-07-14 09:44:33.539400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.287 [2024-07-14 09:44:33.539427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.287 qpair failed and we were unable to recover it. 00:34:49.287 [2024-07-14 09:44:33.539647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.287 [2024-07-14 09:44:33.539674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.287 qpair failed and we were unable to recover it. 00:34:49.287 [2024-07-14 09:44:33.539914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.287 [2024-07-14 09:44:33.539942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.287 qpair failed and we were unable to recover it. 00:34:49.287 [2024-07-14 09:44:33.540125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.287 [2024-07-14 09:44:33.540152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.287 qpair failed and we were unable to recover it. 00:34:49.287 [2024-07-14 09:44:33.540371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.287 [2024-07-14 09:44:33.540397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.287 qpair failed and we were unable to recover it. 00:34:49.287 [2024-07-14 09:44:33.540600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.287 [2024-07-14 09:44:33.540627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.287 qpair failed and we were unable to recover it. 00:34:49.287 [2024-07-14 09:44:33.540822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.287 [2024-07-14 09:44:33.540848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.287 qpair failed and we were unable to recover it. 00:34:49.287 [2024-07-14 09:44:33.541049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.287 [2024-07-14 09:44:33.541076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.287 qpair failed and we were unable to recover it. 00:34:49.287 [2024-07-14 09:44:33.541319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.287 [2024-07-14 09:44:33.541346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.287 qpair failed and we were unable to recover it. 00:34:49.287 [2024-07-14 09:44:33.541508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.287 [2024-07-14 09:44:33.541534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.287 qpair failed and we were unable to recover it. 
00:34:49.287 [2024-07-14 09:44:33.541729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.287 [2024-07-14 09:44:33.541756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.287 qpair failed and we were unable to recover it. 00:34:49.287 [2024-07-14 09:44:33.541951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.287 [2024-07-14 09:44:33.541978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.287 qpair failed and we were unable to recover it. 00:34:49.287 [2024-07-14 09:44:33.542175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.287 [2024-07-14 09:44:33.542202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.287 qpair failed and we were unable to recover it. 00:34:49.287 [2024-07-14 09:44:33.542368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.287 [2024-07-14 09:44:33.542397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.287 qpair failed and we were unable to recover it. 00:34:49.287 [2024-07-14 09:44:33.542584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.287 [2024-07-14 09:44:33.542610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.287 qpair failed and we were unable to recover it. 00:34:49.287 [2024-07-14 09:44:33.542829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.287 [2024-07-14 09:44:33.542860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.287 qpair failed and we were unable to recover it. 00:34:49.287 [2024-07-14 09:44:33.543098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.287 [2024-07-14 09:44:33.543125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.287 qpair failed and we were unable to recover it. 00:34:49.287 [2024-07-14 09:44:33.543286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.287 [2024-07-14 09:44:33.543312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.287 qpair failed and we were unable to recover it. 00:34:49.287 [2024-07-14 09:44:33.543505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.287 [2024-07-14 09:44:33.543532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.287 qpair failed and we were unable to recover it. 00:34:49.287 [2024-07-14 09:44:33.543723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.287 [2024-07-14 09:44:33.543750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.287 qpair failed and we were unable to recover it. 
00:34:49.287 [2024-07-14 09:44:33.543937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.287 [2024-07-14 09:44:33.543966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.287 qpair failed and we were unable to recover it. 00:34:49.287 [2024-07-14 09:44:33.544191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.287 [2024-07-14 09:44:33.544217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.287 qpair failed and we were unable to recover it. 00:34:49.287 [2024-07-14 09:44:33.544409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.287 [2024-07-14 09:44:33.544435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.287 qpair failed and we were unable to recover it. 00:34:49.287 [2024-07-14 09:44:33.544657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.287 [2024-07-14 09:44:33.544683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.287 qpair failed and we were unable to recover it. 00:34:49.287 [2024-07-14 09:44:33.544879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.287 [2024-07-14 09:44:33.544906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.287 qpair failed and we were unable to recover it. 00:34:49.287 [2024-07-14 09:44:33.545091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.287 [2024-07-14 09:44:33.545118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.287 qpair failed and we were unable to recover it. 00:34:49.287 [2024-07-14 09:44:33.545313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.287 [2024-07-14 09:44:33.545339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.287 qpair failed and we were unable to recover it. 00:34:49.287 [2024-07-14 09:44:33.545505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.287 [2024-07-14 09:44:33.545532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.287 qpair failed and we were unable to recover it. 00:34:49.287 [2024-07-14 09:44:33.545747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.287 [2024-07-14 09:44:33.545773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.287 qpair failed and we were unable to recover it. 00:34:49.287 [2024-07-14 09:44:33.545941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.287 [2024-07-14 09:44:33.545968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.287 qpair failed and we were unable to recover it. 
00:34:49.287 [2024-07-14 09:44:33.546171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.287 [2024-07-14 09:44:33.546198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.287 qpair failed and we were unable to recover it. 00:34:49.287 [2024-07-14 09:44:33.546383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.287 [2024-07-14 09:44:33.546409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.287 qpair failed and we were unable to recover it. 00:34:49.287 [2024-07-14 09:44:33.546565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.287 [2024-07-14 09:44:33.546591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.287 qpair failed and we were unable to recover it. 00:34:49.287 [2024-07-14 09:44:33.546810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.287 [2024-07-14 09:44:33.546836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.287 qpair failed and we were unable to recover it. 00:34:49.287 [2024-07-14 09:44:33.547069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.287 [2024-07-14 09:44:33.547097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.287 qpair failed and we were unable to recover it. 00:34:49.287 [2024-07-14 09:44:33.547259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.287 [2024-07-14 09:44:33.547285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.287 qpair failed and we were unable to recover it. 00:34:49.287 [2024-07-14 09:44:33.547512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.287 [2024-07-14 09:44:33.547538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.287 qpair failed and we were unable to recover it. 00:34:49.288 [2024-07-14 09:44:33.547703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.288 [2024-07-14 09:44:33.547729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.288 qpair failed and we were unable to recover it. 00:34:49.288 [2024-07-14 09:44:33.547933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.288 [2024-07-14 09:44:33.547961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.288 qpair failed and we were unable to recover it. 00:34:49.288 [2024-07-14 09:44:33.548133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.288 [2024-07-14 09:44:33.548161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.288 qpair failed and we were unable to recover it. 
00:34:49.288 [2024-07-14 09:44:33.548325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.288 [2024-07-14 09:44:33.548352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.288 qpair failed and we were unable to recover it. 00:34:49.288 [2024-07-14 09:44:33.548542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.288 [2024-07-14 09:44:33.548570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.288 qpair failed and we were unable to recover it. 00:34:49.288 [2024-07-14 09:44:33.548803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.288 [2024-07-14 09:44:33.548829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.288 qpair failed and we were unable to recover it. 00:34:49.288 [2024-07-14 09:44:33.549021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.288 [2024-07-14 09:44:33.549049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.288 qpair failed and we were unable to recover it. 00:34:49.288 [2024-07-14 09:44:33.549244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.288 [2024-07-14 09:44:33.549271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.288 qpair failed and we were unable to recover it. 00:34:49.288 [2024-07-14 09:44:33.549461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.288 [2024-07-14 09:44:33.549488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.288 qpair failed and we were unable to recover it. 00:34:49.288 [2024-07-14 09:44:33.549677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.288 [2024-07-14 09:44:33.549704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.288 qpair failed and we were unable to recover it. 00:34:49.288 [2024-07-14 09:44:33.549889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.288 [2024-07-14 09:44:33.549917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.288 qpair failed and we were unable to recover it. 00:34:49.288 [2024-07-14 09:44:33.550084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.288 [2024-07-14 09:44:33.550110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.288 qpair failed and we were unable to recover it. 00:34:49.288 [2024-07-14 09:44:33.550298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.288 [2024-07-14 09:44:33.550326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.288 qpair failed and we were unable to recover it. 
00:34:49.288 [2024-07-14 09:44:33.550512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.288 [2024-07-14 09:44:33.550539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.288 qpair failed and we were unable to recover it. 00:34:49.288 [2024-07-14 09:44:33.550760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.288 [2024-07-14 09:44:33.550786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.288 qpair failed and we were unable to recover it. 00:34:49.288 [2024-07-14 09:44:33.550947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.288 [2024-07-14 09:44:33.550975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.288 qpair failed and we were unable to recover it. 00:34:49.288 [2024-07-14 09:44:33.551171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.288 [2024-07-14 09:44:33.551197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.288 qpair failed and we were unable to recover it. 00:34:49.288 [2024-07-14 09:44:33.551385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.288 [2024-07-14 09:44:33.551411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.288 qpair failed and we were unable to recover it. 00:34:49.288 [2024-07-14 09:44:33.551565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.288 [2024-07-14 09:44:33.551596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.288 qpair failed and we were unable to recover it. 00:34:49.288 [2024-07-14 09:44:33.551790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.288 [2024-07-14 09:44:33.551817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.288 qpair failed and we were unable to recover it. 00:34:49.288 [2024-07-14 09:44:33.552053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.288 [2024-07-14 09:44:33.552080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.288 qpair failed and we were unable to recover it. 00:34:49.288 [2024-07-14 09:44:33.552262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.288 [2024-07-14 09:44:33.552289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.288 qpair failed and we were unable to recover it. 00:34:49.288 [2024-07-14 09:44:33.552488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.288 [2024-07-14 09:44:33.552515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.288 qpair failed and we were unable to recover it. 
00:34:49.288 [2024-07-14 09:44:33.552737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.288 [2024-07-14 09:44:33.552763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.288 qpair failed and we were unable to recover it. 00:34:49.288 [2024-07-14 09:44:33.552957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.288 [2024-07-14 09:44:33.552984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.288 qpair failed and we were unable to recover it. 00:34:49.288 [2024-07-14 09:44:33.553175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.288 [2024-07-14 09:44:33.553204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.288 qpair failed and we were unable to recover it. 00:34:49.288 [2024-07-14 09:44:33.553430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.288 [2024-07-14 09:44:33.553456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.288 qpair failed and we were unable to recover it. 00:34:49.288 [2024-07-14 09:44:33.553650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.288 [2024-07-14 09:44:33.553677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.288 qpair failed and we were unable to recover it. 00:34:49.288 [2024-07-14 09:44:33.553878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.288 [2024-07-14 09:44:33.553905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.288 qpair failed and we were unable to recover it. 00:34:49.288 [2024-07-14 09:44:33.554060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.288 [2024-07-14 09:44:33.554086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.288 qpair failed and we were unable to recover it. 00:34:49.288 [2024-07-14 09:44:33.554302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.288 [2024-07-14 09:44:33.554328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.288 qpair failed and we were unable to recover it. 00:34:49.288 [2024-07-14 09:44:33.554487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.288 [2024-07-14 09:44:33.554513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.288 qpair failed and we were unable to recover it. 00:34:49.288 [2024-07-14 09:44:33.554716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.288 [2024-07-14 09:44:33.554743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.288 qpair failed and we were unable to recover it. 
00:34:49.288 [2024-07-14 09:44:33.554911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.288 [2024-07-14 09:44:33.554939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.288 qpair failed and we were unable to recover it. 00:34:49.288 [2024-07-14 09:44:33.555107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.288 [2024-07-14 09:44:33.555134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.288 qpair failed and we were unable to recover it. 00:34:49.288 [2024-07-14 09:44:33.555320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.288 [2024-07-14 09:44:33.555347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.288 qpair failed and we were unable to recover it. 00:34:49.288 [2024-07-14 09:44:33.555516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.288 [2024-07-14 09:44:33.555541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.288 qpair failed and we were unable to recover it. 00:34:49.288 [2024-07-14 09:44:33.555726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.288 [2024-07-14 09:44:33.555752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.288 qpair failed and we were unable to recover it. 00:34:49.288 [2024-07-14 09:44:33.555916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.288 [2024-07-14 09:44:33.555943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.288 qpair failed and we were unable to recover it. 00:34:49.288 [2024-07-14 09:44:33.556168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.288 [2024-07-14 09:44:33.556195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.288 qpair failed and we were unable to recover it. 00:34:49.288 [2024-07-14 09:44:33.556371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.288 [2024-07-14 09:44:33.556398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.288 qpair failed and we were unable to recover it. 00:34:49.289 [2024-07-14 09:44:33.556585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.289 [2024-07-14 09:44:33.556611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.289 qpair failed and we were unable to recover it. 00:34:49.289 [2024-07-14 09:44:33.556824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.289 [2024-07-14 09:44:33.556849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.289 qpair failed and we were unable to recover it. 
00:34:49.289 [2024-07-14 09:44:33.557061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.289 [2024-07-14 09:44:33.557088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.289 qpair failed and we were unable to recover it. 00:34:49.289 [2024-07-14 09:44:33.557258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.289 [2024-07-14 09:44:33.557285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.289 qpair failed and we were unable to recover it. 00:34:49.289 [2024-07-14 09:44:33.557484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.289 [2024-07-14 09:44:33.557510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.289 qpair failed and we were unable to recover it. 00:34:49.289 [2024-07-14 09:44:33.557733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.289 [2024-07-14 09:44:33.557760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.289 qpair failed and we were unable to recover it. 00:34:49.289 [2024-07-14 09:44:33.557949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.289 [2024-07-14 09:44:33.557976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.289 qpair failed and we were unable to recover it. 00:34:49.289 [2024-07-14 09:44:33.558164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.289 [2024-07-14 09:44:33.558191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.289 qpair failed and we were unable to recover it. 00:34:49.289 [2024-07-14 09:44:33.558449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.289 [2024-07-14 09:44:33.558476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.289 qpair failed and we were unable to recover it. 00:34:49.289 [2024-07-14 09:44:33.558638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.289 [2024-07-14 09:44:33.558665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.289 qpair failed and we were unable to recover it. 00:34:49.289 [2024-07-14 09:44:33.558858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.289 [2024-07-14 09:44:33.558898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.289 qpair failed and we were unable to recover it. 00:34:49.289 [2024-07-14 09:44:33.559094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.289 [2024-07-14 09:44:33.559121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.289 qpair failed and we were unable to recover it. 
00:34:49.289 [2024-07-14 09:44:33.559339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.289 [2024-07-14 09:44:33.559365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.289 qpair failed and we were unable to recover it. 00:34:49.289 [2024-07-14 09:44:33.559552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.289 [2024-07-14 09:44:33.559579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.289 qpair failed and we were unable to recover it. 00:34:49.289 [2024-07-14 09:44:33.559793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.289 [2024-07-14 09:44:33.559820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.289 qpair failed and we were unable to recover it. 00:34:49.289 [2024-07-14 09:44:33.560034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.289 [2024-07-14 09:44:33.560062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.289 qpair failed and we were unable to recover it. 00:34:49.289 [2024-07-14 09:44:33.560225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.289 [2024-07-14 09:44:33.560252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.289 qpair failed and we were unable to recover it. 00:34:49.289 [2024-07-14 09:44:33.560451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.289 [2024-07-14 09:44:33.560481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.289 qpair failed and we were unable to recover it. 00:34:49.289 [2024-07-14 09:44:33.560672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.289 [2024-07-14 09:44:33.560698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.289 qpair failed and we were unable to recover it. 00:34:49.289 [2024-07-14 09:44:33.560917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.289 [2024-07-14 09:44:33.560944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.289 qpair failed and we were unable to recover it. 00:34:49.289 [2024-07-14 09:44:33.561137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.289 [2024-07-14 09:44:33.561164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.289 qpair failed and we were unable to recover it. 00:34:49.289 [2024-07-14 09:44:33.561352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.289 [2024-07-14 09:44:33.561379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.289 qpair failed and we were unable to recover it. 
00:34:49.289 [2024-07-14 09:44:33.561601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.289 [2024-07-14 09:44:33.561628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.289 qpair failed and we were unable to recover it. 00:34:49.289 [2024-07-14 09:44:33.561843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.289 [2024-07-14 09:44:33.561875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.289 qpair failed and we were unable to recover it. 00:34:49.289 [2024-07-14 09:44:33.562040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.289 [2024-07-14 09:44:33.562066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.289 qpair failed and we were unable to recover it. 00:34:49.289 [2024-07-14 09:44:33.562278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.289 [2024-07-14 09:44:33.562304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.289 qpair failed and we were unable to recover it. 00:34:49.289 [2024-07-14 09:44:33.562490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.289 [2024-07-14 09:44:33.562516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.289 qpair failed and we were unable to recover it. 00:34:49.289 [2024-07-14 09:44:33.562701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.289 [2024-07-14 09:44:33.562727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.289 qpair failed and we were unable to recover it. 00:34:49.289 [2024-07-14 09:44:33.562919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.289 [2024-07-14 09:44:33.562947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.289 qpair failed and we were unable to recover it. 00:34:49.289 [2024-07-14 09:44:33.563160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.289 [2024-07-14 09:44:33.563187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.289 qpair failed and we were unable to recover it. 00:34:49.289 [2024-07-14 09:44:33.563357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.289 [2024-07-14 09:44:33.563384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.289 qpair failed and we were unable to recover it. 00:34:49.289 [2024-07-14 09:44:33.563604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.289 [2024-07-14 09:44:33.563631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.289 qpair failed and we were unable to recover it. 
00:34:49.289 [2024-07-14 09:44:33.563825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.289 [2024-07-14 09:44:33.563851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.289 qpair failed and we were unable to recover it. 00:34:49.289 [2024-07-14 09:44:33.564047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.289 [2024-07-14 09:44:33.564074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.289 qpair failed and we were unable to recover it. 00:34:49.289 [2024-07-14 09:44:33.564260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.289 [2024-07-14 09:44:33.564286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.289 qpair failed and we were unable to recover it. 00:34:49.289 [2024-07-14 09:44:33.564516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.289 [2024-07-14 09:44:33.564542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.289 qpair failed and we were unable to recover it. 00:34:49.289 [2024-07-14 09:44:33.564762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.289 [2024-07-14 09:44:33.564788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.289 qpair failed and we were unable to recover it. 00:34:49.289 [2024-07-14 09:44:33.564982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.289 [2024-07-14 09:44:33.565009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.289 qpair failed and we were unable to recover it. 00:34:49.289 [2024-07-14 09:44:33.565224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.289 [2024-07-14 09:44:33.565251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.289 qpair failed and we were unable to recover it. 00:34:49.289 [2024-07-14 09:44:33.565444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.289 [2024-07-14 09:44:33.565470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.289 qpair failed and we were unable to recover it. 00:34:49.289 [2024-07-14 09:44:33.565661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.289 [2024-07-14 09:44:33.565687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.289 qpair failed and we were unable to recover it. 00:34:49.290 [2024-07-14 09:44:33.565854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.290 [2024-07-14 09:44:33.565886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.290 qpair failed and we were unable to recover it. 
00:34:49.290 [2024-07-14 09:44:33.566047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.290 [2024-07-14 09:44:33.566073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.290 qpair failed and we were unable to recover it. 00:34:49.290 [2024-07-14 09:44:33.566249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.290 [2024-07-14 09:44:33.566276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.290 qpair failed and we were unable to recover it. 00:34:49.290 [2024-07-14 09:44:33.566509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.290 [2024-07-14 09:44:33.566537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.290 qpair failed and we were unable to recover it. 00:34:49.290 [2024-07-14 09:44:33.566729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.290 [2024-07-14 09:44:33.566756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.290 qpair failed and we were unable to recover it. 00:34:49.290 [2024-07-14 09:44:33.566973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.290 [2024-07-14 09:44:33.567001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.290 qpair failed and we were unable to recover it. 00:34:49.290 [2024-07-14 09:44:33.567166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.290 [2024-07-14 09:44:33.567193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.290 qpair failed and we were unable to recover it. 00:34:49.290 [2024-07-14 09:44:33.567414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.290 [2024-07-14 09:44:33.567441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.290 qpair failed and we were unable to recover it. 00:34:49.290 [2024-07-14 09:44:33.567609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.290 [2024-07-14 09:44:33.567636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.290 qpair failed and we were unable to recover it. 00:34:49.290 [2024-07-14 09:44:33.567794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.290 [2024-07-14 09:44:33.567821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.290 qpair failed and we were unable to recover it. 00:34:49.290 [2024-07-14 09:44:33.568012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.290 [2024-07-14 09:44:33.568039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.290 qpair failed and we were unable to recover it. 
00:34:49.290 [2024-07-14 09:44:33.568260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.290 [2024-07-14 09:44:33.568286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.290 qpair failed and we were unable to recover it. 00:34:49.290 [2024-07-14 09:44:33.568447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.290 [2024-07-14 09:44:33.568474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.290 qpair failed and we were unable to recover it. 00:34:49.290 [2024-07-14 09:44:33.568666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.290 [2024-07-14 09:44:33.568693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.290 qpair failed and we were unable to recover it. 00:34:49.290 [2024-07-14 09:44:33.568856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.290 [2024-07-14 09:44:33.568891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.290 qpair failed and we were unable to recover it. 00:34:49.290 [2024-07-14 09:44:33.569126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.290 [2024-07-14 09:44:33.569152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.290 qpair failed and we were unable to recover it. 00:34:49.290 [2024-07-14 09:44:33.569372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.290 [2024-07-14 09:44:33.569403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.290 qpair failed and we were unable to recover it. 00:34:49.290 [2024-07-14 09:44:33.569596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.290 [2024-07-14 09:44:33.569622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.290 qpair failed and we were unable to recover it. 00:34:49.290 [2024-07-14 09:44:33.569808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.290 [2024-07-14 09:44:33.569834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.290 qpair failed and we were unable to recover it. 00:34:49.290 [2024-07-14 09:44:33.570036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.290 [2024-07-14 09:44:33.570063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.290 qpair failed and we were unable to recover it. 00:34:49.290 [2024-07-14 09:44:33.570287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.290 [2024-07-14 09:44:33.570314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.290 qpair failed and we were unable to recover it. 
00:34:49.290 [2024-07-14 09:44:33.570504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.290 [2024-07-14 09:44:33.570530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.290 qpair failed and we were unable to recover it. 00:34:49.290 [2024-07-14 09:44:33.570716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.290 [2024-07-14 09:44:33.570742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.290 qpair failed and we were unable to recover it. 00:34:49.290 [2024-07-14 09:44:33.570918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.290 [2024-07-14 09:44:33.570948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.290 qpair failed and we were unable to recover it. 00:34:49.290 [2024-07-14 09:44:33.571165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.290 [2024-07-14 09:44:33.571192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.290 qpair failed and we were unable to recover it. 00:34:49.290 [2024-07-14 09:44:33.571365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.290 [2024-07-14 09:44:33.571392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.290 qpair failed and we were unable to recover it. 00:34:49.290 [2024-07-14 09:44:33.571609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.290 [2024-07-14 09:44:33.571635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.290 qpair failed and we were unable to recover it. 00:34:49.290 [2024-07-14 09:44:33.571855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.290 [2024-07-14 09:44:33.571907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.290 qpair failed and we were unable to recover it. 00:34:49.290 [2024-07-14 09:44:33.572113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.290 [2024-07-14 09:44:33.572140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.290 qpair failed and we were unable to recover it. 00:34:49.290 [2024-07-14 09:44:33.572338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.290 [2024-07-14 09:44:33.572364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.290 qpair failed and we were unable to recover it. 00:34:49.290 [2024-07-14 09:44:33.572530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.290 [2024-07-14 09:44:33.572556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.290 qpair failed and we were unable to recover it. 
00:34:49.290 [2024-07-14 09:44:33.572761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.290 [2024-07-14 09:44:33.572787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.290 qpair failed and we were unable to recover it. 00:34:49.290 [2024-07-14 09:44:33.572980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.290 [2024-07-14 09:44:33.573007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.290 qpair failed and we were unable to recover it. 00:34:49.290 [2024-07-14 09:44:33.573173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.290 [2024-07-14 09:44:33.573200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.290 qpair failed and we were unable to recover it. 00:34:49.290 [2024-07-14 09:44:33.573394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.290 [2024-07-14 09:44:33.573420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.290 qpair failed and we were unable to recover it. 00:34:49.290 [2024-07-14 09:44:33.573609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.290 [2024-07-14 09:44:33.573635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.290 qpair failed and we were unable to recover it. 00:34:49.290 [2024-07-14 09:44:33.573825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.291 [2024-07-14 09:44:33.573852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.291 qpair failed and we were unable to recover it. 00:34:49.291 [2024-07-14 09:44:33.574047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.291 [2024-07-14 09:44:33.574074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.291 qpair failed and we were unable to recover it. 00:34:49.291 [2024-07-14 09:44:33.574267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.291 [2024-07-14 09:44:33.574294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.291 qpair failed and we were unable to recover it. 00:34:49.291 [2024-07-14 09:44:33.574479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.291 [2024-07-14 09:44:33.574505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.291 qpair failed and we were unable to recover it. 00:34:49.291 [2024-07-14 09:44:33.574692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.291 [2024-07-14 09:44:33.574719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.291 qpair failed and we were unable to recover it. 
00:34:49.291 [2024-07-14 09:44:33.574919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.291 [2024-07-14 09:44:33.574947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.291 qpair failed and we were unable to recover it. 00:34:49.291 [2024-07-14 09:44:33.575136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.291 [2024-07-14 09:44:33.575164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.291 qpair failed and we were unable to recover it. 00:34:49.291 [2024-07-14 09:44:33.575384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.291 [2024-07-14 09:44:33.575411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.291 qpair failed and we were unable to recover it. 00:34:49.291 [2024-07-14 09:44:33.575628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.291 [2024-07-14 09:44:33.575655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.291 qpair failed and we were unable to recover it. 00:34:49.291 [2024-07-14 09:44:33.575849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.291 [2024-07-14 09:44:33.575882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.291 qpair failed and we were unable to recover it. 00:34:49.291 [2024-07-14 09:44:33.576048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.291 [2024-07-14 09:44:33.576075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.291 qpair failed and we were unable to recover it. 00:34:49.291 [2024-07-14 09:44:33.576273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.291 [2024-07-14 09:44:33.576300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.291 qpair failed and we were unable to recover it. 00:34:49.291 [2024-07-14 09:44:33.576489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.291 [2024-07-14 09:44:33.576515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.291 qpair failed and we were unable to recover it. 00:34:49.291 [2024-07-14 09:44:33.576707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.291 [2024-07-14 09:44:33.576734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.291 qpair failed and we were unable to recover it. 00:34:49.291 [2024-07-14 09:44:33.577023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.291 [2024-07-14 09:44:33.577050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.291 qpair failed and we were unable to recover it. 
00:34:49.291 [2024-07-14 09:44:33.577271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.291 [2024-07-14 09:44:33.577298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.291 qpair failed and we were unable to recover it. 00:34:49.291 [2024-07-14 09:44:33.577466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.291 [2024-07-14 09:44:33.577493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.291 qpair failed and we were unable to recover it. 00:34:49.291 [2024-07-14 09:44:33.577709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.291 [2024-07-14 09:44:33.577736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.291 qpair failed and we were unable to recover it. 00:34:49.291 [2024-07-14 09:44:33.577929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.291 [2024-07-14 09:44:33.577956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.291 qpair failed and we were unable to recover it. 00:34:49.291 [2024-07-14 09:44:33.578148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.291 [2024-07-14 09:44:33.578175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.291 qpair failed and we were unable to recover it. 00:34:49.291 [2024-07-14 09:44:33.578382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.291 [2024-07-14 09:44:33.578409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.291 qpair failed and we were unable to recover it. 00:34:49.291 [2024-07-14 09:44:33.578605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.291 [2024-07-14 09:44:33.578631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.291 qpair failed and we were unable to recover it. 00:34:49.291 [2024-07-14 09:44:33.578825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.291 [2024-07-14 09:44:33.578851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.291 qpair failed and we were unable to recover it. 00:34:49.291 [2024-07-14 09:44:33.579065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.291 [2024-07-14 09:44:33.579092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.291 qpair failed and we were unable to recover it. 00:34:49.291 [2024-07-14 09:44:33.579359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.291 [2024-07-14 09:44:33.579386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.291 qpair failed and we were unable to recover it. 
00:34:49.291 [2024-07-14 09:44:33.579570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:49.291 [2024-07-14 09:44:33.579597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420
00:34:49.291 qpair failed and we were unable to recover it.
00:34:49.291 [2024-07-14 09:44:33.579760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:49.291 [2024-07-14 09:44:33.579787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420
00:34:49.291 qpair failed and we were unable to recover it.
00:34:49.291 [2024-07-14 09:44:33.579956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:49.291 [2024-07-14 09:44:33.579983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420
00:34:49.291 qpair failed and we were unable to recover it.
00:34:49.291 [2024-07-14 09:44:33.580144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:49.291 [2024-07-14 09:44:33.580170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420
00:34:49.291 qpair failed and we were unable to recover it.
00:34:49.291 [2024-07-14 09:44:33.580360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:49.291 [2024-07-14 09:44:33.580387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420
00:34:49.291 qpair failed and we were unable to recover it.
00:34:49.291 [2024-07-14 09:44:33.580545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:49.291 [2024-07-14 09:44:33.580571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420
00:34:49.291 qpair failed and we were unable to recover it.
00:34:49.291 [2024-07-14 09:44:33.580736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:49.291 [2024-07-14 09:44:33.580762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420
00:34:49.291 qpair failed and we were unable to recover it.
00:34:49.291 [2024-07-14 09:44:33.580948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:49.291 [2024-07-14 09:44:33.580975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420
00:34:49.291 qpair failed and we were unable to recover it.
00:34:49.291 [2024-07-14 09:44:33.580973] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:34:49.291 [2024-07-14 09:44:33.581008] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:34:49.291 [2024-07-14 09:44:33.581028] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:34:49.291 [2024-07-14 09:44:33.581041] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:34:49.291 [2024-07-14 09:44:33.581051] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
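The app_setup_trace notices above spell out how to pull the tracepoint data while the target is running. A minimal sketch based only on those notices; the destination path below is an arbitrary choice, not something taken from this job:

  # Capture a snapshot of trace events from the running nvmf application (shm instance 0), as the notice suggests.
  spdk_trace -s nvmf -i 0
  # Or keep the raw shared-memory trace file for offline analysis/debug.
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0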
00:34:49.291 [2024-07-14 09:44:33.581108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:34:49.291 [2024-07-14 09:44:33.581191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:49.291 [2024-07-14 09:44:33.581138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:34:49.291 [2024-07-14 09:44:33.581231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420
00:34:49.291 qpair failed and we were unable to recover it.
00:34:49.291 [2024-07-14 09:44:33.581183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:34:49.291 [2024-07-14 09:44:33.581186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:34:49.291 [2024-07-14 09:44:33.581412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:49.291 [2024-07-14 09:44:33.581437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420
00:34:49.291 qpair failed and we were unable to recover it.
00:34:49.291 [2024-07-14 09:44:33.581602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:49.291 [2024-07-14 09:44:33.581629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420
00:34:49.291 qpair failed and we were unable to recover it.
00:34:49.291 [2024-07-14 09:44:33.581817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:49.291 [2024-07-14 09:44:33.581842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420
00:34:49.291 qpair failed and we were unable to recover it.
00:34:49.291 [2024-07-14 09:44:33.582012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:49.291 [2024-07-14 09:44:33.582039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420
00:34:49.291 qpair failed and we were unable to recover it.
00:34:49.292 [2024-07-14 09:44:33.582237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:49.292 [2024-07-14 09:44:33.582263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420
00:34:49.292 qpair failed and we were unable to recover it.
00:34:49.292 [2024-07-14 09:44:33.582453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:49.292 [2024-07-14 09:44:33.582478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420
00:34:49.292 qpair failed and we were unable to recover it.
00:34:49.292 [2024-07-14 09:44:33.582666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:49.292 [2024-07-14 09:44:33.582691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420
00:34:49.292 qpair failed and we were unable to recover it.
00:34:49.292 [2024-07-14 09:44:33.582842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:49.292 [2024-07-14 09:44:33.582875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420
00:34:49.292 qpair failed and we were unable to recover it.
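The reactor_run notices above show the target's pollers coming up on cores 4, 5, 6 and 7, which corresponds to a CPU mask of 0xF0 (bits 4 through 7 set). A sketch of how that placement is normally requested when launching the target, assuming the stock nvmf_tgt binary path and its standard -m core-mask option; the flags actually used by this job are not visible in this part of the log:

  # Illustrative launch only: pin the target's reactors to cores 4-7 with the 0xF0 core mask.
  ./build/bin/nvmf_tgt -m 0xF0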
00:34:49.292 [2024-07-14 09:44:33.583144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.292 [2024-07-14 09:44:33.583171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.292 qpair failed and we were unable to recover it. 00:34:49.292 [2024-07-14 09:44:33.583350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.292 [2024-07-14 09:44:33.583376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.292 qpair failed and we were unable to recover it. 00:34:49.292 [2024-07-14 09:44:33.583580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.292 [2024-07-14 09:44:33.583611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.292 qpair failed and we were unable to recover it. 00:34:49.292 [2024-07-14 09:44:33.583774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.292 [2024-07-14 09:44:33.583800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.292 qpair failed and we were unable to recover it. 00:34:49.292 [2024-07-14 09:44:33.583979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.292 [2024-07-14 09:44:33.584005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.292 qpair failed and we were unable to recover it. 00:34:49.292 [2024-07-14 09:44:33.584168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.292 [2024-07-14 09:44:33.584194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.292 qpair failed and we were unable to recover it. 00:34:49.292 [2024-07-14 09:44:33.584373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.292 [2024-07-14 09:44:33.584399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.292 qpair failed and we were unable to recover it. 00:34:49.292 [2024-07-14 09:44:33.584583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.292 [2024-07-14 09:44:33.584608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.292 qpair failed and we were unable to recover it. 00:34:49.292 [2024-07-14 09:44:33.584772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.292 [2024-07-14 09:44:33.584798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.292 qpair failed and we were unable to recover it. 00:34:49.292 [2024-07-14 09:44:33.584961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.292 [2024-07-14 09:44:33.584987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.292 qpair failed and we were unable to recover it. 
00:34:49.292 [2024-07-14 09:44:33.585167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.292 [2024-07-14 09:44:33.585192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.292 qpair failed and we were unable to recover it. 00:34:49.292 [2024-07-14 09:44:33.585407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.292 [2024-07-14 09:44:33.585433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.292 qpair failed and we were unable to recover it. 00:34:49.292 [2024-07-14 09:44:33.585623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.292 [2024-07-14 09:44:33.585648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.292 qpair failed and we were unable to recover it. 00:34:49.292 [2024-07-14 09:44:33.585803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.292 [2024-07-14 09:44:33.585829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.292 qpair failed and we were unable to recover it. 00:34:49.292 [2024-07-14 09:44:33.586023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.292 [2024-07-14 09:44:33.586049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.292 qpair failed and we were unable to recover it. 00:34:49.292 [2024-07-14 09:44:33.586239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.292 [2024-07-14 09:44:33.586264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.292 qpair failed and we were unable to recover it. 00:34:49.292 [2024-07-14 09:44:33.586446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.292 [2024-07-14 09:44:33.586474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.292 qpair failed and we were unable to recover it. 00:34:49.292 [2024-07-14 09:44:33.586634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.292 [2024-07-14 09:44:33.586660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.292 qpair failed and we were unable to recover it. 00:34:49.292 [2024-07-14 09:44:33.586875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.292 [2024-07-14 09:44:33.586901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.292 qpair failed and we were unable to recover it. 00:34:49.292 [2024-07-14 09:44:33.587094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.292 [2024-07-14 09:44:33.587120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.292 qpair failed and we were unable to recover it. 
00:34:49.292 [2024-07-14 09:44:33.587306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.292 [2024-07-14 09:44:33.587332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.292 qpair failed and we were unable to recover it. 00:34:49.292 [2024-07-14 09:44:33.587583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.292 [2024-07-14 09:44:33.587609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.292 qpair failed and we were unable to recover it. 00:34:49.292 [2024-07-14 09:44:33.587803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.292 [2024-07-14 09:44:33.587828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.292 qpair failed and we were unable to recover it. 00:34:49.292 [2024-07-14 09:44:33.588004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.292 [2024-07-14 09:44:33.588030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.292 qpair failed and we were unable to recover it. 00:34:49.292 [2024-07-14 09:44:33.588224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.292 [2024-07-14 09:44:33.588250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.292 qpair failed and we were unable to recover it. 00:34:49.292 [2024-07-14 09:44:33.588452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.292 [2024-07-14 09:44:33.588478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.292 qpair failed and we were unable to recover it. 00:34:49.292 [2024-07-14 09:44:33.588646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.292 [2024-07-14 09:44:33.588671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.292 qpair failed and we were unable to recover it. 00:34:49.292 [2024-07-14 09:44:33.588839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.292 [2024-07-14 09:44:33.588871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.292 qpair failed and we were unable to recover it. 00:34:49.292 [2024-07-14 09:44:33.589064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.292 [2024-07-14 09:44:33.589090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.292 qpair failed and we were unable to recover it. 00:34:49.292 [2024-07-14 09:44:33.589247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.292 [2024-07-14 09:44:33.589281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.292 qpair failed and we were unable to recover it. 
00:34:49.292 [2024-07-14 09:44:33.589478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.292 [2024-07-14 09:44:33.589504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.292 qpair failed and we were unable to recover it. 00:34:49.292 [2024-07-14 09:44:33.589690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.292 [2024-07-14 09:44:33.589715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.292 qpair failed and we were unable to recover it. 00:34:49.292 [2024-07-14 09:44:33.589887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.292 [2024-07-14 09:44:33.589913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.292 qpair failed and we were unable to recover it. 00:34:49.292 [2024-07-14 09:44:33.590131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.292 [2024-07-14 09:44:33.590157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.292 qpair failed and we were unable to recover it. 00:34:49.292 [2024-07-14 09:44:33.590324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.292 [2024-07-14 09:44:33.590349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.292 qpair failed and we were unable to recover it. 00:34:49.292 [2024-07-14 09:44:33.590520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.292 [2024-07-14 09:44:33.590547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.292 qpair failed and we were unable to recover it. 00:34:49.292 [2024-07-14 09:44:33.590705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.292 [2024-07-14 09:44:33.590730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.292 qpair failed and we were unable to recover it. 00:34:49.292 [2024-07-14 09:44:33.590889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.292 [2024-07-14 09:44:33.590915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.293 qpair failed and we were unable to recover it. 00:34:49.293 [2024-07-14 09:44:33.591110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.293 [2024-07-14 09:44:33.591135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.293 qpair failed and we were unable to recover it. 00:34:49.293 [2024-07-14 09:44:33.591296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.293 [2024-07-14 09:44:33.591321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.293 qpair failed and we were unable to recover it. 
00:34:49.293 [2024-07-14 09:44:33.591517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.293 [2024-07-14 09:44:33.591543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.293 qpair failed and we were unable to recover it. 00:34:49.293 [2024-07-14 09:44:33.591732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.293 [2024-07-14 09:44:33.591757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.293 qpair failed and we were unable to recover it. 00:34:49.293 [2024-07-14 09:44:33.591942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.293 [2024-07-14 09:44:33.591985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.293 qpair failed and we were unable to recover it. 00:34:49.293 [2024-07-14 09:44:33.592183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.293 [2024-07-14 09:44:33.592211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.293 qpair failed and we were unable to recover it. 00:34:49.293 [2024-07-14 09:44:33.592409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.293 [2024-07-14 09:44:33.592435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.293 qpair failed and we were unable to recover it. 00:34:49.293 [2024-07-14 09:44:33.592599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.293 [2024-07-14 09:44:33.592626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.293 qpair failed and we were unable to recover it. 00:34:49.293 [2024-07-14 09:44:33.592812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.293 [2024-07-14 09:44:33.592838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.293 qpair failed and we were unable to recover it. 00:34:49.293 [2024-07-14 09:44:33.593040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.293 [2024-07-14 09:44:33.593068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.293 qpair failed and we were unable to recover it. 00:34:49.293 [2024-07-14 09:44:33.593242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.293 [2024-07-14 09:44:33.593270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.293 qpair failed and we were unable to recover it. 00:34:49.293 [2024-07-14 09:44:33.593528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.293 [2024-07-14 09:44:33.593554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.293 qpair failed and we were unable to recover it. 
00:34:49.293 [2024-07-14 09:44:33.593747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.293 [2024-07-14 09:44:33.593773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.293 qpair failed and we were unable to recover it. 00:34:49.293 [2024-07-14 09:44:33.593980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.293 [2024-07-14 09:44:33.594007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.293 qpair failed and we were unable to recover it. 00:34:49.293 [2024-07-14 09:44:33.594195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.293 [2024-07-14 09:44:33.594221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.293 qpair failed and we were unable to recover it. 00:34:49.293 [2024-07-14 09:44:33.594378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.293 [2024-07-14 09:44:33.594404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.293 qpair failed and we were unable to recover it. 00:34:49.293 [2024-07-14 09:44:33.594590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.293 [2024-07-14 09:44:33.594615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.293 qpair failed and we were unable to recover it. 00:34:49.293 [2024-07-14 09:44:33.594780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.293 [2024-07-14 09:44:33.594805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.293 qpair failed and we were unable to recover it. 00:34:49.293 [2024-07-14 09:44:33.594966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.293 [2024-07-14 09:44:33.594996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.293 qpair failed and we were unable to recover it. 00:34:49.293 [2024-07-14 09:44:33.595184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.293 [2024-07-14 09:44:33.595210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.293 qpair failed and we were unable to recover it. 00:34:49.293 [2024-07-14 09:44:33.595497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.293 [2024-07-14 09:44:33.595522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.293 qpair failed and we were unable to recover it. 00:34:49.293 [2024-07-14 09:44:33.595722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.293 [2024-07-14 09:44:33.595747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.293 qpair failed and we were unable to recover it. 
00:34:49.293 [2024-07-14 09:44:33.595912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.293 [2024-07-14 09:44:33.595938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.293 qpair failed and we were unable to recover it. 00:34:49.293 [2024-07-14 09:44:33.596123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.293 [2024-07-14 09:44:33.596148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.293 qpair failed and we were unable to recover it. 00:34:49.293 [2024-07-14 09:44:33.596458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.293 [2024-07-14 09:44:33.596484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.293 qpair failed and we were unable to recover it. 00:34:49.293 [2024-07-14 09:44:33.596638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.293 [2024-07-14 09:44:33.596663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.293 qpair failed and we were unable to recover it. 00:34:49.293 [2024-07-14 09:44:33.596874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.293 [2024-07-14 09:44:33.596900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.293 qpair failed and we were unable to recover it. 00:34:49.293 [2024-07-14 09:44:33.597086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.293 [2024-07-14 09:44:33.597111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.293 qpair failed and we were unable to recover it. 00:34:49.293 [2024-07-14 09:44:33.597276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.293 [2024-07-14 09:44:33.597301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.293 qpair failed and we were unable to recover it. 00:34:49.293 [2024-07-14 09:44:33.597491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.293 [2024-07-14 09:44:33.597516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.293 qpair failed and we were unable to recover it. 00:34:49.293 [2024-07-14 09:44:33.597716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.293 [2024-07-14 09:44:33.597741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.293 qpair failed and we were unable to recover it. 00:34:49.293 [2024-07-14 09:44:33.597906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.293 [2024-07-14 09:44:33.597932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.293 qpair failed and we were unable to recover it. 
00:34:49.293 [2024-07-14 09:44:33.598104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.293 [2024-07-14 09:44:33.598130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.293 qpair failed and we were unable to recover it. 00:34:49.293 [2024-07-14 09:44:33.598290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.293 [2024-07-14 09:44:33.598315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.293 qpair failed and we were unable to recover it. 00:34:49.293 [2024-07-14 09:44:33.598501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.293 [2024-07-14 09:44:33.598526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.293 qpair failed and we were unable to recover it. 00:34:49.293 [2024-07-14 09:44:33.598682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.293 [2024-07-14 09:44:33.598707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.293 qpair failed and we were unable to recover it. 00:34:49.293 [2024-07-14 09:44:33.598908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.293 [2024-07-14 09:44:33.598934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.293 qpair failed and we were unable to recover it. 00:34:49.293 [2024-07-14 09:44:33.599090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.293 [2024-07-14 09:44:33.599116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.293 qpair failed and we were unable to recover it. 00:34:49.293 [2024-07-14 09:44:33.599283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.293 [2024-07-14 09:44:33.599308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.293 qpair failed and we were unable to recover it. 00:34:49.293 [2024-07-14 09:44:33.599611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.293 [2024-07-14 09:44:33.599637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.293 qpair failed and we were unable to recover it. 00:34:49.293 [2024-07-14 09:44:33.599821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.293 [2024-07-14 09:44:33.599846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.293 qpair failed and we were unable to recover it. 00:34:49.294 [2024-07-14 09:44:33.600055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.294 [2024-07-14 09:44:33.600080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.294 qpair failed and we were unable to recover it. 
00:34:49.294 [2024-07-14 09:44:33.600228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.294 [2024-07-14 09:44:33.600254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.294 qpair failed and we were unable to recover it. 00:34:49.294 [2024-07-14 09:44:33.600450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.294 [2024-07-14 09:44:33.600475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.294 qpair failed and we were unable to recover it. 00:34:49.294 [2024-07-14 09:44:33.600671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.294 [2024-07-14 09:44:33.600697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.294 qpair failed and we were unable to recover it. 00:34:49.294 [2024-07-14 09:44:33.600870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.294 [2024-07-14 09:44:33.600896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.294 qpair failed and we were unable to recover it. 00:34:49.294 [2024-07-14 09:44:33.601058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.294 [2024-07-14 09:44:33.601084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.294 qpair failed and we were unable to recover it. 00:34:49.294 [2024-07-14 09:44:33.601259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.294 [2024-07-14 09:44:33.601285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.294 qpair failed and we were unable to recover it. 00:34:49.294 [2024-07-14 09:44:33.601487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.294 [2024-07-14 09:44:33.601512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.294 qpair failed and we were unable to recover it. 00:34:49.294 [2024-07-14 09:44:33.601678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.294 [2024-07-14 09:44:33.601706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.294 qpair failed and we were unable to recover it. 00:34:49.294 [2024-07-14 09:44:33.601863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.294 [2024-07-14 09:44:33.601895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.294 qpair failed and we were unable to recover it. 00:34:49.294 [2024-07-14 09:44:33.602084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.294 [2024-07-14 09:44:33.602109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.294 qpair failed and we were unable to recover it. 
00:34:49.294 [2024-07-14 09:44:33.602269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.294 [2024-07-14 09:44:33.602296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.294 qpair failed and we were unable to recover it. 00:34:49.294 [2024-07-14 09:44:33.602481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.294 [2024-07-14 09:44:33.602506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.294 qpair failed and we were unable to recover it. 00:34:49.294 [2024-07-14 09:44:33.602676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.294 [2024-07-14 09:44:33.602702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff6600 with addr=10.0.0.2, port=4420 00:34:49.294 qpair failed and we were unable to recover it. 00:34:49.294 [2024-07-14 09:44:33.602881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.294 [2024-07-14 09:44:33.602925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.294 qpair failed and we were unable to recover it. 00:34:49.294 [2024-07-14 09:44:33.603118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.294 [2024-07-14 09:44:33.603146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.294 qpair failed and we were unable to recover it. 00:34:49.294 [2024-07-14 09:44:33.603309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.294 [2024-07-14 09:44:33.603337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.294 qpair failed and we were unable to recover it. 00:34:49.294 [2024-07-14 09:44:33.603532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.294 [2024-07-14 09:44:33.603558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.294 qpair failed and we were unable to recover it. 00:34:49.294 [2024-07-14 09:44:33.603815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.294 [2024-07-14 09:44:33.603847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.294 qpair failed and we were unable to recover it. 00:34:49.294 [2024-07-14 09:44:33.604057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.294 [2024-07-14 09:44:33.604084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.294 qpair failed and we were unable to recover it. 00:34:49.294 [2024-07-14 09:44:33.604297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.294 [2024-07-14 09:44:33.604323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.294 qpair failed and we were unable to recover it. 
00:34:49.294 [2024-07-14 09:44:33.604486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.294 [2024-07-14 09:44:33.604514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.294 qpair failed and we were unable to recover it. 00:34:49.294 [2024-07-14 09:44:33.604678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.294 [2024-07-14 09:44:33.604705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.294 qpair failed and we were unable to recover it. 00:34:49.294 [2024-07-14 09:44:33.604929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.294 [2024-07-14 09:44:33.604957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.294 qpair failed and we were unable to recover it. 00:34:49.294 [2024-07-14 09:44:33.605145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.294 [2024-07-14 09:44:33.605172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.294 qpair failed and we were unable to recover it. 00:34:49.294 [2024-07-14 09:44:33.605369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.294 [2024-07-14 09:44:33.605396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.294 qpair failed and we were unable to recover it. 00:34:49.294 [2024-07-14 09:44:33.605657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.294 [2024-07-14 09:44:33.605683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.294 qpair failed and we were unable to recover it. 00:34:49.294 [2024-07-14 09:44:33.605854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.294 [2024-07-14 09:44:33.605888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.294 qpair failed and we were unable to recover it. 00:34:49.294 [2024-07-14 09:44:33.606067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.294 [2024-07-14 09:44:33.606092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.294 qpair failed and we were unable to recover it. 00:34:49.294 [2024-07-14 09:44:33.606286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.294 [2024-07-14 09:44:33.606314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.294 qpair failed and we were unable to recover it. 00:34:49.294 [2024-07-14 09:44:33.606515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.294 [2024-07-14 09:44:33.606542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.294 qpair failed and we were unable to recover it. 
00:34:49.294 [2024-07-14 09:44:33.606740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.294 [2024-07-14 09:44:33.606766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.294 qpair failed and we were unable to recover it. 00:34:49.294 [2024-07-14 09:44:33.606961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.294 [2024-07-14 09:44:33.606989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.294 qpair failed and we were unable to recover it. 00:34:49.294 [2024-07-14 09:44:33.607191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.294 [2024-07-14 09:44:33.607217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.294 qpair failed and we were unable to recover it. 00:34:49.294 [2024-07-14 09:44:33.607404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.294 [2024-07-14 09:44:33.607430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.294 qpair failed and we were unable to recover it. 00:34:49.294 [2024-07-14 09:44:33.607616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.294 [2024-07-14 09:44:33.607642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.294 qpair failed and we were unable to recover it. 00:34:49.294 [2024-07-14 09:44:33.607852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.294 [2024-07-14 09:44:33.607884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.294 qpair failed and we were unable to recover it. 00:34:49.294 [2024-07-14 09:44:33.608077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.294 [2024-07-14 09:44:33.608103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.294 qpair failed and we were unable to recover it. 00:34:49.294 [2024-07-14 09:44:33.608289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.294 [2024-07-14 09:44:33.608315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.294 qpair failed and we were unable to recover it. 00:34:49.294 [2024-07-14 09:44:33.608511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.294 [2024-07-14 09:44:33.608537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.294 qpair failed and we were unable to recover it. 00:34:49.294 [2024-07-14 09:44:33.608727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.294 [2024-07-14 09:44:33.608753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.294 qpair failed and we were unable to recover it. 
00:34:49.294 [2024-07-14 09:44:33.608918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.295 [2024-07-14 09:44:33.608945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.295 qpair failed and we were unable to recover it. 00:34:49.295 [2024-07-14 09:44:33.609125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.295 [2024-07-14 09:44:33.609151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.295 qpair failed and we were unable to recover it. 00:34:49.295 [2024-07-14 09:44:33.609319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.295 [2024-07-14 09:44:33.609346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.295 qpair failed and we were unable to recover it. 00:34:49.295 [2024-07-14 09:44:33.609544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.295 [2024-07-14 09:44:33.609571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.295 qpair failed and we were unable to recover it. 00:34:49.295 [2024-07-14 09:44:33.609747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.295 [2024-07-14 09:44:33.609774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.295 qpair failed and we were unable to recover it. 00:34:49.295 [2024-07-14 09:44:33.609961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.295 [2024-07-14 09:44:33.609988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.295 qpair failed and we were unable to recover it. 00:34:49.295 [2024-07-14 09:44:33.610164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.295 [2024-07-14 09:44:33.610191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.295 qpair failed and we were unable to recover it. 00:34:49.295 [2024-07-14 09:44:33.610357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.295 [2024-07-14 09:44:33.610383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.295 qpair failed and we were unable to recover it. 00:34:49.295 [2024-07-14 09:44:33.610543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.295 [2024-07-14 09:44:33.610569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.295 qpair failed and we were unable to recover it. 00:34:49.295 [2024-07-14 09:44:33.610732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.295 [2024-07-14 09:44:33.610757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.295 qpair failed and we were unable to recover it. 
00:34:49.295 [2024-07-14 09:44:33.610931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.295 [2024-07-14 09:44:33.610959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.295 qpair failed and we were unable to recover it. 00:34:49.295 [2024-07-14 09:44:33.611161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.295 [2024-07-14 09:44:33.611188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.295 qpair failed and we were unable to recover it. 00:34:49.295 [2024-07-14 09:44:33.611343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.295 [2024-07-14 09:44:33.611369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.295 qpair failed and we were unable to recover it. 00:34:49.295 [2024-07-14 09:44:33.611559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.295 [2024-07-14 09:44:33.611585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.295 qpair failed and we were unable to recover it. 00:34:49.295 [2024-07-14 09:44:33.611777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.295 [2024-07-14 09:44:33.611803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.295 qpair failed and we were unable to recover it. 00:34:49.295 [2024-07-14 09:44:33.611960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.295 [2024-07-14 09:44:33.611987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.295 qpair failed and we were unable to recover it. 00:34:49.295 [2024-07-14 09:44:33.612185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.295 [2024-07-14 09:44:33.612211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.295 qpair failed and we were unable to recover it. 00:34:49.295 [2024-07-14 09:44:33.612398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.295 [2024-07-14 09:44:33.612429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.295 qpair failed and we were unable to recover it. 00:34:49.295 [2024-07-14 09:44:33.612614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.295 [2024-07-14 09:44:33.612640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.295 qpair failed and we were unable to recover it. 00:34:49.295 [2024-07-14 09:44:33.612830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.295 [2024-07-14 09:44:33.612857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.295 qpair failed and we were unable to recover it. 
00:34:49.295 [2024-07-14 09:44:33.613071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.295 [2024-07-14 09:44:33.613098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.295 qpair failed and we were unable to recover it. 00:34:49.295 [2024-07-14 09:44:33.613317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.295 [2024-07-14 09:44:33.613343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.295 qpair failed and we were unable to recover it. 00:34:49.295 [2024-07-14 09:44:33.613535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.295 [2024-07-14 09:44:33.613561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.295 qpair failed and we were unable to recover it. 00:34:49.295 [2024-07-14 09:44:33.613735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.295 [2024-07-14 09:44:33.613761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.295 qpair failed and we were unable to recover it. 00:34:49.295 [2024-07-14 09:44:33.613932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.295 [2024-07-14 09:44:33.613960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.295 qpair failed and we were unable to recover it. 00:34:49.295 [2024-07-14 09:44:33.614195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.295 [2024-07-14 09:44:33.614222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.295 qpair failed and we were unable to recover it. 00:34:49.295 [2024-07-14 09:44:33.614409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.295 [2024-07-14 09:44:33.614435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.295 qpair failed and we were unable to recover it. 00:34:49.295 [2024-07-14 09:44:33.614629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.295 [2024-07-14 09:44:33.614656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.295 qpair failed and we were unable to recover it. 00:34:49.295 [2024-07-14 09:44:33.614815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.295 [2024-07-14 09:44:33.614841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.295 qpair failed and we were unable to recover it. 00:34:49.295 [2024-07-14 09:44:33.615039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.295 [2024-07-14 09:44:33.615066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.295 qpair failed and we were unable to recover it. 
00:34:49.295 [2024-07-14 09:44:33.615253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.295 [2024-07-14 09:44:33.615280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.295 qpair failed and we were unable to recover it. 00:34:49.295 [2024-07-14 09:44:33.615474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.295 [2024-07-14 09:44:33.615500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.295 qpair failed and we were unable to recover it. 00:34:49.295 [2024-07-14 09:44:33.615700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.295 [2024-07-14 09:44:33.615727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.295 qpair failed and we were unable to recover it. 00:34:49.295 [2024-07-14 09:44:33.615901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.295 [2024-07-14 09:44:33.615929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.295 qpair failed and we were unable to recover it. 00:34:49.295 [2024-07-14 09:44:33.616124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.295 [2024-07-14 09:44:33.616152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.295 qpair failed and we were unable to recover it. 00:34:49.295 [2024-07-14 09:44:33.616312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.295 [2024-07-14 09:44:33.616340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.295 qpair failed and we were unable to recover it. 00:34:49.295 [2024-07-14 09:44:33.616524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.296 [2024-07-14 09:44:33.616551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.296 qpair failed and we were unable to recover it. 00:34:49.296 [2024-07-14 09:44:33.616737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.296 [2024-07-14 09:44:33.616763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.296 qpair failed and we were unable to recover it. 00:34:49.296 [2024-07-14 09:44:33.616933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.296 [2024-07-14 09:44:33.616961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.296 qpair failed and we were unable to recover it. 00:34:49.296 [2024-07-14 09:44:33.617144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.296 [2024-07-14 09:44:33.617170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.296 qpair failed and we were unable to recover it. 
00:34:49.296 [2024-07-14 09:44:33.617368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.296 [2024-07-14 09:44:33.617394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.296 qpair failed and we were unable to recover it. 00:34:49.296 [2024-07-14 09:44:33.617583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.296 [2024-07-14 09:44:33.617609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.296 qpair failed and we were unable to recover it. 00:34:49.296 [2024-07-14 09:44:33.617783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.296 [2024-07-14 09:44:33.617809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.296 qpair failed and we were unable to recover it. 00:34:49.296 [2024-07-14 09:44:33.618023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.296 [2024-07-14 09:44:33.618050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.296 qpair failed and we were unable to recover it. 00:34:49.296 [2024-07-14 09:44:33.618239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.296 [2024-07-14 09:44:33.618266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.296 qpair failed and we were unable to recover it. 00:34:49.296 [2024-07-14 09:44:33.618435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.296 [2024-07-14 09:44:33.618462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.296 qpair failed and we were unable to recover it. 00:34:49.296 [2024-07-14 09:44:33.618628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.296 [2024-07-14 09:44:33.618654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.296 qpair failed and we were unable to recover it. 00:34:49.296 [2024-07-14 09:44:33.618968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.296 [2024-07-14 09:44:33.618996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.296 qpair failed and we were unable to recover it. 00:34:49.296 [2024-07-14 09:44:33.619179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.296 [2024-07-14 09:44:33.619205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.296 qpair failed and we were unable to recover it. 00:34:49.296 [2024-07-14 09:44:33.619361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.296 [2024-07-14 09:44:33.619388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.296 qpair failed and we were unable to recover it. 
00:34:49.296 [2024-07-14 09:44:33.619555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.296 [2024-07-14 09:44:33.619581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.296 qpair failed and we were unable to recover it. 00:34:49.296 [2024-07-14 09:44:33.619800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.296 [2024-07-14 09:44:33.619827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.296 qpair failed and we were unable to recover it. 00:34:49.296 [2024-07-14 09:44:33.620053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.296 [2024-07-14 09:44:33.620080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.296 qpair failed and we were unable to recover it. 00:34:49.296 [2024-07-14 09:44:33.620243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.296 [2024-07-14 09:44:33.620269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.296 qpair failed and we were unable to recover it. 00:34:49.296 [2024-07-14 09:44:33.620478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.296 [2024-07-14 09:44:33.620504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.296 qpair failed and we were unable to recover it. 00:34:49.296 [2024-07-14 09:44:33.620687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.296 [2024-07-14 09:44:33.620713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.296 qpair failed and we were unable to recover it. 00:34:49.296 [2024-07-14 09:44:33.620882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.296 [2024-07-14 09:44:33.620909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.296 qpair failed and we were unable to recover it. 00:34:49.296 [2024-07-14 09:44:33.621094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.296 [2024-07-14 09:44:33.621124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.296 qpair failed and we were unable to recover it. 00:34:49.296 [2024-07-14 09:44:33.621314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.296 [2024-07-14 09:44:33.621341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.296 qpair failed and we were unable to recover it. 00:34:49.296 [2024-07-14 09:44:33.621507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.296 [2024-07-14 09:44:33.621533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.296 qpair failed and we were unable to recover it. 
00:34:49.296 [2024-07-14 09:44:33.621727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.296 [2024-07-14 09:44:33.621754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.296 qpair failed and we were unable to recover it. 00:34:49.296 [2024-07-14 09:44:33.621933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.296 [2024-07-14 09:44:33.621961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.296 qpair failed and we were unable to recover it. 00:34:49.296 [2024-07-14 09:44:33.622193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.296 [2024-07-14 09:44:33.622219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.296 qpair failed and we were unable to recover it. 00:34:49.296 [2024-07-14 09:44:33.622443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.296 [2024-07-14 09:44:33.622470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.296 qpair failed and we were unable to recover it. 00:34:49.296 [2024-07-14 09:44:33.622658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.296 [2024-07-14 09:44:33.622684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.296 qpair failed and we were unable to recover it. 00:34:49.296 [2024-07-14 09:44:33.622878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.296 [2024-07-14 09:44:33.622911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.296 qpair failed and we were unable to recover it. 00:34:49.296 [2024-07-14 09:44:33.623100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.296 [2024-07-14 09:44:33.623127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.296 qpair failed and we were unable to recover it. 00:34:49.296 [2024-07-14 09:44:33.623318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.296 [2024-07-14 09:44:33.623344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.296 qpair failed and we were unable to recover it. 00:34:49.296 [2024-07-14 09:44:33.623533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.296 [2024-07-14 09:44:33.623559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.296 qpair failed and we were unable to recover it. 00:34:49.296 [2024-07-14 09:44:33.623735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.296 [2024-07-14 09:44:33.623761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.296 qpair failed and we were unable to recover it. 
00:34:49.296 [2024-07-14 09:44:33.623969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.296 [2024-07-14 09:44:33.623996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.296 qpair failed and we were unable to recover it. 00:34:49.296 [2024-07-14 09:44:33.624194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.296 [2024-07-14 09:44:33.624220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.296 qpair failed and we were unable to recover it. 00:34:49.296 [2024-07-14 09:44:33.624427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.296 [2024-07-14 09:44:33.624454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.296 qpair failed and we were unable to recover it. 00:34:49.296 [2024-07-14 09:44:33.624641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.296 [2024-07-14 09:44:33.624667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.296 qpair failed and we were unable to recover it. 00:34:49.296 [2024-07-14 09:44:33.624845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.296 [2024-07-14 09:44:33.624879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.296 qpair failed and we were unable to recover it. 00:34:49.296 [2024-07-14 09:44:33.625075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.296 [2024-07-14 09:44:33.625101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.296 qpair failed and we were unable to recover it. 00:34:49.296 [2024-07-14 09:44:33.625281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.296 [2024-07-14 09:44:33.625307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.296 qpair failed and we were unable to recover it. 00:34:49.296 [2024-07-14 09:44:33.625503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.297 [2024-07-14 09:44:33.625531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.297 qpair failed and we were unable to recover it. 00:34:49.297 [2024-07-14 09:44:33.625694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.297 [2024-07-14 09:44:33.625720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.297 qpair failed and we were unable to recover it. 00:34:49.297 [2024-07-14 09:44:33.625887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.297 [2024-07-14 09:44:33.625914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.297 qpair failed and we were unable to recover it. 
00:34:49.297 [2024-07-14 09:44:33.626075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.297 [2024-07-14 09:44:33.626101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.297 qpair failed and we were unable to recover it. 00:34:49.297 [2024-07-14 09:44:33.626293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.297 [2024-07-14 09:44:33.626321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.297 qpair failed and we were unable to recover it. 00:34:49.297 [2024-07-14 09:44:33.626517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.297 [2024-07-14 09:44:33.626544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.297 qpair failed and we were unable to recover it. 00:34:49.297 [2024-07-14 09:44:33.626712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.297 [2024-07-14 09:44:33.626739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.297 qpair failed and we were unable to recover it. 00:34:49.297 [2024-07-14 09:44:33.626916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.297 [2024-07-14 09:44:33.626944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.297 qpair failed and we were unable to recover it. 00:34:49.297 [2024-07-14 09:44:33.627136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.297 [2024-07-14 09:44:33.627163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.297 qpair failed and we were unable to recover it. 00:34:49.297 [2024-07-14 09:44:33.627358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.297 [2024-07-14 09:44:33.627384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.297 qpair failed and we were unable to recover it. 00:34:49.297 [2024-07-14 09:44:33.627580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.297 [2024-07-14 09:44:33.627607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.297 qpair failed and we were unable to recover it. 00:34:49.297 [2024-07-14 09:44:33.627781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.297 [2024-07-14 09:44:33.627807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.297 qpair failed and we were unable to recover it. 00:34:49.297 [2024-07-14 09:44:33.628025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.297 [2024-07-14 09:44:33.628052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.297 qpair failed and we were unable to recover it. 
00:34:49.297 [2024-07-14 09:44:33.628344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.297 [2024-07-14 09:44:33.628370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.297 qpair failed and we were unable to recover it. 00:34:49.297 [2024-07-14 09:44:33.628567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.297 [2024-07-14 09:44:33.628593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.297 qpair failed and we were unable to recover it. 00:34:49.297 [2024-07-14 09:44:33.628781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.297 [2024-07-14 09:44:33.628807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.297 qpair failed and we were unable to recover it. 00:34:49.297 [2024-07-14 09:44:33.628965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.297 [2024-07-14 09:44:33.628993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.297 qpair failed and we were unable to recover it. 00:34:49.297 [2024-07-14 09:44:33.629183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.297 [2024-07-14 09:44:33.629209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.297 qpair failed and we were unable to recover it. 00:34:49.297 [2024-07-14 09:44:33.629408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.297 [2024-07-14 09:44:33.629435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.297 qpair failed and we were unable to recover it. 00:34:49.297 [2024-07-14 09:44:33.629644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.297 [2024-07-14 09:44:33.629671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.297 qpair failed and we were unable to recover it. 00:34:49.297 [2024-07-14 09:44:33.629901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.297 [2024-07-14 09:44:33.629933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.297 qpair failed and we were unable to recover it. 00:34:49.297 [2024-07-14 09:44:33.630096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.297 [2024-07-14 09:44:33.630123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.297 qpair failed and we were unable to recover it. 00:34:49.297 [2024-07-14 09:44:33.630311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.297 [2024-07-14 09:44:33.630337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.297 qpair failed and we were unable to recover it. 
00:34:49.297 [2024-07-14 09:44:33.630529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.297 [2024-07-14 09:44:33.630556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.297 qpair failed and we were unable to recover it. 00:34:49.297 [2024-07-14 09:44:33.630767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.297 [2024-07-14 09:44:33.630793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.297 qpair failed and we were unable to recover it. 00:34:49.297 [2024-07-14 09:44:33.630971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.297 [2024-07-14 09:44:33.630998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.297 qpair failed and we were unable to recover it. 00:34:49.297 [2024-07-14 09:44:33.631186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.297 [2024-07-14 09:44:33.631213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.297 qpair failed and we were unable to recover it. 00:34:49.297 [2024-07-14 09:44:33.631428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.297 [2024-07-14 09:44:33.631455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.297 qpair failed and we were unable to recover it. 00:34:49.297 [2024-07-14 09:44:33.631620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.297 [2024-07-14 09:44:33.631647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.297 qpair failed and we were unable to recover it. 00:34:49.297 [2024-07-14 09:44:33.631838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.297 [2024-07-14 09:44:33.631871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.297 qpair failed and we were unable to recover it. 00:34:49.297 [2024-07-14 09:44:33.632055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.297 [2024-07-14 09:44:33.632081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.297 qpair failed and we were unable to recover it. 00:34:49.297 [2024-07-14 09:44:33.632269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.297 [2024-07-14 09:44:33.632295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.297 qpair failed and we were unable to recover it. 00:34:49.297 [2024-07-14 09:44:33.632473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.297 [2024-07-14 09:44:33.632500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.297 qpair failed and we were unable to recover it. 
00:34:49.297 [2024-07-14 09:44:33.632685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.297 [2024-07-14 09:44:33.632711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.297 qpair failed and we were unable to recover it. 00:34:49.297 [2024-07-14 09:44:33.632904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.297 [2024-07-14 09:44:33.632932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.297 qpair failed and we were unable to recover it. 00:34:49.297 [2024-07-14 09:44:33.633121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.297 [2024-07-14 09:44:33.633147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.297 qpair failed and we were unable to recover it. 00:34:49.297 [2024-07-14 09:44:33.633330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.297 [2024-07-14 09:44:33.633356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.297 qpair failed and we were unable to recover it. 00:34:49.297 [2024-07-14 09:44:33.633546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.297 [2024-07-14 09:44:33.633573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.297 qpair failed and we were unable to recover it. 00:34:49.297 [2024-07-14 09:44:33.633740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.297 [2024-07-14 09:44:33.633768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.297 qpair failed and we were unable to recover it. 00:34:49.297 [2024-07-14 09:44:33.633930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.297 [2024-07-14 09:44:33.633957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.297 qpair failed and we were unable to recover it. 00:34:49.297 [2024-07-14 09:44:33.634166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.297 [2024-07-14 09:44:33.634192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.297 qpair failed and we were unable to recover it. 00:34:49.297 [2024-07-14 09:44:33.634353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.298 [2024-07-14 09:44:33.634379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.298 qpair failed and we were unable to recover it. 00:34:49.298 [2024-07-14 09:44:33.634564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.298 [2024-07-14 09:44:33.634590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.298 qpair failed and we were unable to recover it. 
00:34:49.298 [2024-07-14 09:44:33.634772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.298 [2024-07-14 09:44:33.634798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.298 qpair failed and we were unable to recover it. 00:34:49.298 [2024-07-14 09:44:33.634981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.298 [2024-07-14 09:44:33.635007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.298 qpair failed and we were unable to recover it. 00:34:49.298 [2024-07-14 09:44:33.635194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.298 [2024-07-14 09:44:33.635221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.298 qpair failed and we were unable to recover it. 00:34:49.298 [2024-07-14 09:44:33.635380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.298 [2024-07-14 09:44:33.635406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.298 qpair failed and we were unable to recover it. 00:34:49.298 [2024-07-14 09:44:33.635594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.298 [2024-07-14 09:44:33.635620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.298 qpair failed and we were unable to recover it. 00:34:49.298 [2024-07-14 09:44:33.635817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.298 [2024-07-14 09:44:33.635843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.298 qpair failed and we were unable to recover it. 00:34:49.298 [2024-07-14 09:44:33.636004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.298 [2024-07-14 09:44:33.636031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.298 qpair failed and we were unable to recover it. 00:34:49.298 [2024-07-14 09:44:33.636200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.298 [2024-07-14 09:44:33.636226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.298 qpair failed and we were unable to recover it. 00:34:49.298 [2024-07-14 09:44:33.636410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.298 [2024-07-14 09:44:33.636436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.298 qpair failed and we were unable to recover it. 00:34:49.298 [2024-07-14 09:44:33.636596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.298 [2024-07-14 09:44:33.636623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.298 qpair failed and we were unable to recover it. 
00:34:49.298 [2024-07-14 09:44:33.636808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.298 [2024-07-14 09:44:33.636834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.298 qpair failed and we were unable to recover it. 00:34:49.298 [2024-07-14 09:44:33.637023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.298 [2024-07-14 09:44:33.637050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.298 qpair failed and we were unable to recover it. 00:34:49.298 [2024-07-14 09:44:33.637238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.298 [2024-07-14 09:44:33.637264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.298 qpair failed and we were unable to recover it. 00:34:49.298 [2024-07-14 09:44:33.637425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.298 [2024-07-14 09:44:33.637453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.298 qpair failed and we were unable to recover it. 00:34:49.298 [2024-07-14 09:44:33.637649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.298 [2024-07-14 09:44:33.637675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.298 qpair failed and we were unable to recover it. 00:34:49.298 [2024-07-14 09:44:33.637831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.298 [2024-07-14 09:44:33.637857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.298 qpair failed and we were unable to recover it. 00:34:49.298 [2024-07-14 09:44:33.638099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.298 [2024-07-14 09:44:33.638126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.298 qpair failed and we were unable to recover it. 00:34:49.298 [2024-07-14 09:44:33.638310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.298 [2024-07-14 09:44:33.638341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.298 qpair failed and we were unable to recover it. 00:34:49.298 [2024-07-14 09:44:33.638531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.298 [2024-07-14 09:44:33.638557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.298 qpair failed and we were unable to recover it. 00:34:49.298 [2024-07-14 09:44:33.638710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.298 [2024-07-14 09:44:33.638736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.298 qpair failed and we were unable to recover it. 
00:34:49.298 [2024-07-14 09:44:33.638948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.298 [2024-07-14 09:44:33.638975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.298 qpair failed and we were unable to recover it. 00:34:49.298 [2024-07-14 09:44:33.639169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.298 [2024-07-14 09:44:33.639196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.298 qpair failed and we were unable to recover it. 00:34:49.298 [2024-07-14 09:44:33.639389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.298 [2024-07-14 09:44:33.639417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.298 qpair failed and we were unable to recover it. 00:34:49.298 [2024-07-14 09:44:33.639634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.298 [2024-07-14 09:44:33.639660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.298 qpair failed and we were unable to recover it. 00:34:49.298 [2024-07-14 09:44:33.639851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.298 [2024-07-14 09:44:33.639882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.298 qpair failed and we were unable to recover it. 00:34:49.298 [2024-07-14 09:44:33.640072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.298 [2024-07-14 09:44:33.640098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.298 qpair failed and we were unable to recover it. 00:34:49.298 [2024-07-14 09:44:33.640289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.298 [2024-07-14 09:44:33.640316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.298 qpair failed and we were unable to recover it. 00:34:49.298 [2024-07-14 09:44:33.640473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.298 [2024-07-14 09:44:33.640499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.298 qpair failed and we were unable to recover it. 00:34:49.298 [2024-07-14 09:44:33.640692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.298 [2024-07-14 09:44:33.640719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.298 qpair failed and we were unable to recover it. 00:34:49.298 [2024-07-14 09:44:33.640903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.298 [2024-07-14 09:44:33.640930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.298 qpair failed and we were unable to recover it. 
00:34:49.298 [2024-07-14 09:44:33.641090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.298 [2024-07-14 09:44:33.641116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.298 qpair failed and we were unable to recover it. 00:34:49.298 [2024-07-14 09:44:33.641302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.298 [2024-07-14 09:44:33.641328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.298 qpair failed and we were unable to recover it. 00:34:49.298 [2024-07-14 09:44:33.641482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.298 [2024-07-14 09:44:33.641509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.298 qpair failed and we were unable to recover it. 00:34:49.298 [2024-07-14 09:44:33.641694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.298 [2024-07-14 09:44:33.641720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.298 qpair failed and we were unable to recover it. 00:34:49.298 [2024-07-14 09:44:33.641910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.298 [2024-07-14 09:44:33.641936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.298 qpair failed and we were unable to recover it. 00:34:49.298 [2024-07-14 09:44:33.642119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.298 [2024-07-14 09:44:33.642146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.298 qpair failed and we were unable to recover it. 00:34:49.298 [2024-07-14 09:44:33.642337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.298 [2024-07-14 09:44:33.642364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.298 qpair failed and we were unable to recover it. 00:34:49.298 [2024-07-14 09:44:33.642632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.298 [2024-07-14 09:44:33.642658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.298 qpair failed and we were unable to recover it. 00:34:49.298 [2024-07-14 09:44:33.642845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.298 [2024-07-14 09:44:33.642879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.298 qpair failed and we were unable to recover it. 00:34:49.298 [2024-07-14 09:44:33.643074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.299 [2024-07-14 09:44:33.643101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.299 qpair failed and we were unable to recover it. 
00:34:49.299 [2024-07-14 09:44:33.643287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.299 [2024-07-14 09:44:33.643313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.299 qpair failed and we were unable to recover it. 00:34:49.299 [2024-07-14 09:44:33.643512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.299 [2024-07-14 09:44:33.643539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.299 qpair failed and we were unable to recover it. 00:34:49.299 [2024-07-14 09:44:33.643731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.299 [2024-07-14 09:44:33.643757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.299 qpair failed and we were unable to recover it. 00:34:49.299 [2024-07-14 09:44:33.643931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.299 [2024-07-14 09:44:33.643958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.299 qpair failed and we were unable to recover it. 00:34:49.299 [2024-07-14 09:44:33.644156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.299 [2024-07-14 09:44:33.644183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.299 qpair failed and we were unable to recover it. 00:34:49.299 [2024-07-14 09:44:33.644366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.299 [2024-07-14 09:44:33.644392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.299 qpair failed and we were unable to recover it. 00:34:49.299 [2024-07-14 09:44:33.644578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.299 [2024-07-14 09:44:33.644604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.299 qpair failed and we were unable to recover it. 00:34:49.299 [2024-07-14 09:44:33.644757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.299 [2024-07-14 09:44:33.644784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.299 qpair failed and we were unable to recover it. 00:34:49.299 [2024-07-14 09:44:33.644962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.299 [2024-07-14 09:44:33.644990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.299 qpair failed and we were unable to recover it. 00:34:49.299 [2024-07-14 09:44:33.645182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.299 [2024-07-14 09:44:33.645209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.299 qpair failed and we were unable to recover it. 
00:34:49.299 [2024-07-14 09:44:33.645390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.299 [2024-07-14 09:44:33.645416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.299 qpair failed and we were unable to recover it. 00:34:49.299 [2024-07-14 09:44:33.645578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.299 [2024-07-14 09:44:33.645605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.299 qpair failed and we were unable to recover it. 00:34:49.299 [2024-07-14 09:44:33.645766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.299 [2024-07-14 09:44:33.645793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.299 qpair failed and we were unable to recover it. 00:34:49.299 [2024-07-14 09:44:33.645981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.299 [2024-07-14 09:44:33.646008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.299 qpair failed and we were unable to recover it. 00:34:49.299 [2024-07-14 09:44:33.646211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.299 [2024-07-14 09:44:33.646237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.299 qpair failed and we were unable to recover it. 00:34:49.299 [2024-07-14 09:44:33.646427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.299 [2024-07-14 09:44:33.646453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.299 qpair failed and we were unable to recover it. 00:34:49.299 [2024-07-14 09:44:33.646619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.299 [2024-07-14 09:44:33.646647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.299 qpair failed and we were unable to recover it. 00:34:49.299 [2024-07-14 09:44:33.646838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.299 [2024-07-14 09:44:33.646876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.299 qpair failed and we were unable to recover it. 00:34:49.299 [2024-07-14 09:44:33.647061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.299 [2024-07-14 09:44:33.647087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.299 qpair failed and we were unable to recover it. 00:34:49.299 [2024-07-14 09:44:33.647251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.299 [2024-07-14 09:44:33.647279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.299 qpair failed and we were unable to recover it. 
00:34:49.299 [2024-07-14 09:44:33.647468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.299 [2024-07-14 09:44:33.647494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.299 qpair failed and we were unable to recover it. 00:34:49.299 [2024-07-14 09:44:33.647657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.299 [2024-07-14 09:44:33.647684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.299 qpair failed and we were unable to recover it. 00:34:49.299 [2024-07-14 09:44:33.647862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.299 [2024-07-14 09:44:33.647896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.299 qpair failed and we were unable to recover it. 00:34:49.299 [2024-07-14 09:44:33.648057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.299 [2024-07-14 09:44:33.648083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.299 qpair failed and we were unable to recover it. 00:34:49.299 [2024-07-14 09:44:33.648276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.299 [2024-07-14 09:44:33.648303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.299 qpair failed and we were unable to recover it. 00:34:49.299 [2024-07-14 09:44:33.648486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.299 [2024-07-14 09:44:33.648511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.299 qpair failed and we were unable to recover it. 00:34:49.299 [2024-07-14 09:44:33.648703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.299 [2024-07-14 09:44:33.648729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.299 qpair failed and we were unable to recover it. 00:34:49.299 [2024-07-14 09:44:33.648918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.299 [2024-07-14 09:44:33.648945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.299 qpair failed and we were unable to recover it. 00:34:49.299 [2024-07-14 09:44:33.649125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.299 [2024-07-14 09:44:33.649151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.299 qpair failed and we were unable to recover it. 00:34:49.299 [2024-07-14 09:44:33.649345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.299 [2024-07-14 09:44:33.649372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.299 qpair failed and we were unable to recover it. 
00:34:49.299 [2024-07-14 09:44:33.649569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.299 [2024-07-14 09:44:33.649597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.299 qpair failed and we were unable to recover it. 00:34:49.299 [2024-07-14 09:44:33.649808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.299 [2024-07-14 09:44:33.649835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.299 qpair failed and we were unable to recover it. 00:34:49.299 [2024-07-14 09:44:33.649999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.299 [2024-07-14 09:44:33.650025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.299 qpair failed and we were unable to recover it. 00:34:49.299 [2024-07-14 09:44:33.650209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.299 [2024-07-14 09:44:33.650235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.299 qpair failed and we were unable to recover it. 00:34:49.299 [2024-07-14 09:44:33.650412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.299 [2024-07-14 09:44:33.650438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.299 qpair failed and we were unable to recover it. 00:34:49.299 [2024-07-14 09:44:33.650624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.300 [2024-07-14 09:44:33.650650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.300 qpair failed and we were unable to recover it. 00:34:49.300 [2024-07-14 09:44:33.650812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.300 [2024-07-14 09:44:33.650838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.300 qpair failed and we were unable to recover it. 00:34:49.300 [2024-07-14 09:44:33.651009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.300 [2024-07-14 09:44:33.651036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.300 qpair failed and we were unable to recover it. 00:34:49.300 [2024-07-14 09:44:33.651243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.300 [2024-07-14 09:44:33.651270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.300 qpair failed and we were unable to recover it. 00:34:49.300 [2024-07-14 09:44:33.651476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.300 [2024-07-14 09:44:33.651503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.300 qpair failed and we were unable to recover it. 
00:34:49.300 [2024-07-14 09:44:33.651703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.300 [2024-07-14 09:44:33.651729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.300 qpair failed and we were unable to recover it. 00:34:49.300 [2024-07-14 09:44:33.651927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.300 [2024-07-14 09:44:33.651954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.300 qpair failed and we were unable to recover it. 00:34:49.300 [2024-07-14 09:44:33.652149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.300 [2024-07-14 09:44:33.652176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.300 qpair failed and we were unable to recover it. 00:34:49.300 [2024-07-14 09:44:33.652503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.300 [2024-07-14 09:44:33.652529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.300 qpair failed and we were unable to recover it. 00:34:49.300 [2024-07-14 09:44:33.652731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.300 [2024-07-14 09:44:33.652757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.300 qpair failed and we were unable to recover it. 00:34:49.300 [2024-07-14 09:44:33.652920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.300 [2024-07-14 09:44:33.652948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.300 qpair failed and we were unable to recover it. 00:34:49.300 [2024-07-14 09:44:33.653109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.300 [2024-07-14 09:44:33.653136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.300 qpair failed and we were unable to recover it. 00:34:49.300 [2024-07-14 09:44:33.653324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.300 [2024-07-14 09:44:33.653350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.300 qpair failed and we were unable to recover it. 00:34:49.300 [2024-07-14 09:44:33.653521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.300 [2024-07-14 09:44:33.653547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.300 qpair failed and we were unable to recover it. 00:34:49.300 [2024-07-14 09:44:33.653739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.300 [2024-07-14 09:44:33.653766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.300 qpair failed and we were unable to recover it. 
00:34:49.300 [2024-07-14 09:44:33.653951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.300 [2024-07-14 09:44:33.653978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.300 qpair failed and we were unable to recover it. 00:34:49.300 [2024-07-14 09:44:33.654162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.300 [2024-07-14 09:44:33.654189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.300 qpair failed and we were unable to recover it. 00:34:49.300 [2024-07-14 09:44:33.654354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.300 [2024-07-14 09:44:33.654381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.300 qpair failed and we were unable to recover it. 00:34:49.300 [2024-07-14 09:44:33.654567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.300 [2024-07-14 09:44:33.654593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.300 qpair failed and we were unable to recover it. 00:34:49.300 [2024-07-14 09:44:33.654763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.300 [2024-07-14 09:44:33.654791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.300 qpair failed and we were unable to recover it. 00:34:49.300 [2024-07-14 09:44:33.654949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.300 [2024-07-14 09:44:33.654976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.300 qpair failed and we were unable to recover it. 00:34:49.300 [2024-07-14 09:44:33.655189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.300 [2024-07-14 09:44:33.655215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.300 qpair failed and we were unable to recover it. 00:34:49.300 [2024-07-14 09:44:33.655377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.300 [2024-07-14 09:44:33.655408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.300 qpair failed and we were unable to recover it. 00:34:49.300 [2024-07-14 09:44:33.655562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.300 [2024-07-14 09:44:33.655589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.300 qpair failed and we were unable to recover it. 00:34:49.300 [2024-07-14 09:44:33.655754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.300 [2024-07-14 09:44:33.655782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.300 qpair failed and we were unable to recover it. 
00:34:49.300 [2024-07-14 09:44:33.656067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.300 [2024-07-14 09:44:33.656095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.300 qpair failed and we were unable to recover it. 00:34:49.300 [2024-07-14 09:44:33.656256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.300 [2024-07-14 09:44:33.656282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.300 qpair failed and we were unable to recover it. 00:34:49.300 [2024-07-14 09:44:33.656498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.300 [2024-07-14 09:44:33.656524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.300 qpair failed and we were unable to recover it. 00:34:49.300 [2024-07-14 09:44:33.656698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.300 [2024-07-14 09:44:33.656725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.300 qpair failed and we were unable to recover it. 00:34:49.300 [2024-07-14 09:44:33.656888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.300 [2024-07-14 09:44:33.656917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.300 qpair failed and we were unable to recover it. 00:34:49.300 [2024-07-14 09:44:33.657082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.300 [2024-07-14 09:44:33.657109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.300 qpair failed and we were unable to recover it. 00:34:49.300 [2024-07-14 09:44:33.657276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.300 [2024-07-14 09:44:33.657303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.300 qpair failed and we were unable to recover it. 00:34:49.300 [2024-07-14 09:44:33.657495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.300 [2024-07-14 09:44:33.657521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.300 qpair failed and we were unable to recover it. 00:34:49.300 [2024-07-14 09:44:33.657711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.300 [2024-07-14 09:44:33.657738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.300 qpair failed and we were unable to recover it. 00:34:49.300 [2024-07-14 09:44:33.657911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.300 [2024-07-14 09:44:33.657938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.300 qpair failed and we were unable to recover it. 
00:34:49.300 [2024-07-14 09:44:33.658094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.300 [2024-07-14 09:44:33.658120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.300 qpair failed and we were unable to recover it. 00:34:49.300 [2024-07-14 09:44:33.658341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.300 [2024-07-14 09:44:33.658368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.300 qpair failed and we were unable to recover it. 00:34:49.300 [2024-07-14 09:44:33.658534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.300 [2024-07-14 09:44:33.658560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.300 qpair failed and we were unable to recover it. 00:34:49.300 [2024-07-14 09:44:33.658721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.300 [2024-07-14 09:44:33.658747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.300 qpair failed and we were unable to recover it. 00:34:49.300 [2024-07-14 09:44:33.658942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.300 [2024-07-14 09:44:33.658969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.300 qpair failed and we were unable to recover it. 00:34:49.300 [2024-07-14 09:44:33.659127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.300 [2024-07-14 09:44:33.659153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.300 qpair failed and we were unable to recover it. 00:34:49.300 [2024-07-14 09:44:33.659342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.301 [2024-07-14 09:44:33.659368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.301 qpair failed and we were unable to recover it. 00:34:49.301 [2024-07-14 09:44:33.659537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.301 [2024-07-14 09:44:33.659564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.301 qpair failed and we were unable to recover it. 00:34:49.301 [2024-07-14 09:44:33.659764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.301 [2024-07-14 09:44:33.659790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.301 qpair failed and we were unable to recover it. 00:34:49.301 [2024-07-14 09:44:33.659952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.301 [2024-07-14 09:44:33.659978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.301 qpair failed and we were unable to recover it. 
00:34:49.301 [2024-07-14 09:44:33.660179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.301 [2024-07-14 09:44:33.660207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.301 qpair failed and we were unable to recover it. 00:34:49.301 [2024-07-14 09:44:33.660391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.301 [2024-07-14 09:44:33.660418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.301 qpair failed and we were unable to recover it. 00:34:49.301 [2024-07-14 09:44:33.660608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.301 [2024-07-14 09:44:33.660634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.301 qpair failed and we were unable to recover it. 00:34:49.301 [2024-07-14 09:44:33.660786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.301 [2024-07-14 09:44:33.660812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.301 qpair failed and we were unable to recover it. 00:34:49.301 [2024-07-14 09:44:33.661005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.301 [2024-07-14 09:44:33.661033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.301 qpair failed and we were unable to recover it. 00:34:49.301 [2024-07-14 09:44:33.661226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.301 [2024-07-14 09:44:33.661253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.301 qpair failed and we were unable to recover it. 00:34:49.301 [2024-07-14 09:44:33.661443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.301 [2024-07-14 09:44:33.661469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.301 qpair failed and we were unable to recover it. 00:34:49.301 [2024-07-14 09:44:33.661638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.301 [2024-07-14 09:44:33.661665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.301 qpair failed and we were unable to recover it. 00:34:49.301 [2024-07-14 09:44:33.661904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.301 [2024-07-14 09:44:33.661932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.301 qpair failed and we were unable to recover it. 00:34:49.301 [2024-07-14 09:44:33.662157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.301 [2024-07-14 09:44:33.662184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.301 qpair failed and we were unable to recover it. 
00:34:49.301 [2024-07-14 09:44:33.662342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.301 [2024-07-14 09:44:33.662368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.301 qpair failed and we were unable to recover it. 00:34:49.301 [2024-07-14 09:44:33.662549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.301 [2024-07-14 09:44:33.662575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.301 qpair failed and we were unable to recover it. 00:34:49.301 [2024-07-14 09:44:33.662729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.301 [2024-07-14 09:44:33.662755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.301 qpair failed and we were unable to recover it. 00:34:49.301 [2024-07-14 09:44:33.662940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.301 [2024-07-14 09:44:33.662966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.301 qpair failed and we were unable to recover it. 00:34:49.301 [2024-07-14 09:44:33.663173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.301 [2024-07-14 09:44:33.663199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.301 qpair failed and we were unable to recover it. 00:34:49.301 [2024-07-14 09:44:33.663388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.301 [2024-07-14 09:44:33.663415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.301 qpair failed and we were unable to recover it. 00:34:49.301 [2024-07-14 09:44:33.663588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.301 [2024-07-14 09:44:33.663616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.301 qpair failed and we were unable to recover it. 00:34:49.301 [2024-07-14 09:44:33.663792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.301 [2024-07-14 09:44:33.663823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.301 qpair failed and we were unable to recover it. 00:34:49.301 [2024-07-14 09:44:33.664039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.301 [2024-07-14 09:44:33.664066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.301 qpair failed and we were unable to recover it. 00:34:49.301 [2024-07-14 09:44:33.664270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.301 [2024-07-14 09:44:33.664297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.301 qpair failed and we were unable to recover it. 
00:34:49.301 [2024-07-14 09:44:33.664463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.301 [2024-07-14 09:44:33.664490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.301 qpair failed and we were unable to recover it. 00:34:49.301 [2024-07-14 09:44:33.664681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.301 [2024-07-14 09:44:33.664707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.301 qpair failed and we were unable to recover it. 00:34:49.301 [2024-07-14 09:44:33.664876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.301 [2024-07-14 09:44:33.664908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.301 qpair failed and we were unable to recover it. 00:34:49.301 [2024-07-14 09:44:33.665127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.301 [2024-07-14 09:44:33.665154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.301 qpair failed and we were unable to recover it. 00:34:49.301 [2024-07-14 09:44:33.665313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.301 [2024-07-14 09:44:33.665339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.301 qpair failed and we were unable to recover it. 00:34:49.301 [2024-07-14 09:44:33.665532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.301 [2024-07-14 09:44:33.665558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.301 qpair failed and we were unable to recover it. 00:34:49.301 [2024-07-14 09:44:33.665781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.301 [2024-07-14 09:44:33.665807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.301 qpair failed and we were unable to recover it. 00:34:49.301 [2024-07-14 09:44:33.666013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.301 [2024-07-14 09:44:33.666040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.301 qpair failed and we were unable to recover it. 00:34:49.301 [2024-07-14 09:44:33.666240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.301 [2024-07-14 09:44:33.666266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.301 qpair failed and we were unable to recover it. 00:34:49.301 [2024-07-14 09:44:33.666429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.301 [2024-07-14 09:44:33.666455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.301 qpair failed and we were unable to recover it. 
00:34:49.301 [2024-07-14 09:44:33.666642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.301 [2024-07-14 09:44:33.666669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.301 qpair failed and we were unable to recover it. 00:34:49.301 [2024-07-14 09:44:33.666847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.301 [2024-07-14 09:44:33.666883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.301 qpair failed and we were unable to recover it. 00:34:49.301 [2024-07-14 09:44:33.667093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.301 [2024-07-14 09:44:33.667119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.301 qpair failed and we were unable to recover it. 00:34:49.301 [2024-07-14 09:44:33.667285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.301 [2024-07-14 09:44:33.667312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.301 qpair failed and we were unable to recover it. 00:34:49.301 [2024-07-14 09:44:33.667464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.301 [2024-07-14 09:44:33.667490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.301 qpair failed and we were unable to recover it. 00:34:49.301 [2024-07-14 09:44:33.667705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.301 [2024-07-14 09:44:33.667731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.301 qpair failed and we were unable to recover it. 00:34:49.301 [2024-07-14 09:44:33.667898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.301 [2024-07-14 09:44:33.667925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.301 qpair failed and we were unable to recover it. 00:34:49.301 [2024-07-14 09:44:33.668123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.302 [2024-07-14 09:44:33.668149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.302 qpair failed and we were unable to recover it. 00:34:49.302 [2024-07-14 09:44:33.668341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.302 [2024-07-14 09:44:33.668367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.302 qpair failed and we were unable to recover it. 00:34:49.302 [2024-07-14 09:44:33.668540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.302 [2024-07-14 09:44:33.668566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.302 qpair failed and we were unable to recover it. 
00:34:49.302 [2024-07-14 09:44:33.668754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.302 [2024-07-14 09:44:33.668780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.302 qpair failed and we were unable to recover it. 00:34:49.302 [2024-07-14 09:44:33.668981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.302 [2024-07-14 09:44:33.669009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.302 qpair failed and we were unable to recover it. 00:34:49.302 [2024-07-14 09:44:33.669177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.302 [2024-07-14 09:44:33.669204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.302 qpair failed and we were unable to recover it. 00:34:49.302 [2024-07-14 09:44:33.669424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.302 [2024-07-14 09:44:33.669450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.302 qpair failed and we were unable to recover it. 00:34:49.302 [2024-07-14 09:44:33.669664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.302 [2024-07-14 09:44:33.669691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.302 qpair failed and we were unable to recover it. 00:34:49.302 [2024-07-14 09:44:33.669894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.302 [2024-07-14 09:44:33.669922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.302 qpair failed and we were unable to recover it. 00:34:49.302 [2024-07-14 09:44:33.670106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.302 [2024-07-14 09:44:33.670132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.302 qpair failed and we were unable to recover it. 00:34:49.302 [2024-07-14 09:44:33.670324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.302 [2024-07-14 09:44:33.670351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.302 qpair failed and we were unable to recover it. 00:34:49.302 [2024-07-14 09:44:33.670565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.302 [2024-07-14 09:44:33.670591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.302 qpair failed and we were unable to recover it. 00:34:49.302 [2024-07-14 09:44:33.670749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.302 [2024-07-14 09:44:33.670775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.302 qpair failed and we were unable to recover it. 
00:34:49.302 [2024-07-14 09:44:33.670964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.302 [2024-07-14 09:44:33.670992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.302 qpair failed and we were unable to recover it. 00:34:49.302 [2024-07-14 09:44:33.671157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.302 [2024-07-14 09:44:33.671183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.302 qpair failed and we were unable to recover it. 00:34:49.302 [2024-07-14 09:44:33.671366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.302 [2024-07-14 09:44:33.671393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.302 qpair failed and we were unable to recover it. 00:34:49.302 [2024-07-14 09:44:33.671562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.302 [2024-07-14 09:44:33.671588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.302 qpair failed and we were unable to recover it. 00:34:49.302 [2024-07-14 09:44:33.671749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.302 [2024-07-14 09:44:33.671775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.302 qpair failed and we were unable to recover it. 00:34:49.302 [2024-07-14 09:44:33.671938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.302 [2024-07-14 09:44:33.671965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.302 qpair failed and we were unable to recover it. 00:34:49.302 [2024-07-14 09:44:33.672155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.302 [2024-07-14 09:44:33.672181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.302 qpair failed and we were unable to recover it. 00:34:49.302 [2024-07-14 09:44:33.672374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.302 [2024-07-14 09:44:33.672406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.302 qpair failed and we were unable to recover it. 00:34:49.302 [2024-07-14 09:44:33.672584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.302 [2024-07-14 09:44:33.672611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.302 qpair failed and we were unable to recover it. 00:34:49.302 [2024-07-14 09:44:33.672818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.302 [2024-07-14 09:44:33.672844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.302 qpair failed and we were unable to recover it. 
00:34:49.302 [2024-07-14 09:44:33.673016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.302 [2024-07-14 09:44:33.673044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.302 qpair failed and we were unable to recover it. 00:34:49.302 [2024-07-14 09:44:33.673219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.302 [2024-07-14 09:44:33.673246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.302 qpair failed and we were unable to recover it. 00:34:49.302 [2024-07-14 09:44:33.673410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.302 [2024-07-14 09:44:33.673437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.302 qpair failed and we were unable to recover it. 00:34:49.302 [2024-07-14 09:44:33.673627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.302 [2024-07-14 09:44:33.673655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.302 qpair failed and we were unable to recover it. 00:34:49.302 [2024-07-14 09:44:33.673842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.302 [2024-07-14 09:44:33.673876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.302 qpair failed and we were unable to recover it. 00:34:49.302 [2024-07-14 09:44:33.674043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.302 [2024-07-14 09:44:33.674069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.302 qpair failed and we were unable to recover it. 00:34:49.302 [2024-07-14 09:44:33.674262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.302 [2024-07-14 09:44:33.674289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.302 qpair failed and we were unable to recover it. 00:34:49.302 [2024-07-14 09:44:33.674486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.302 [2024-07-14 09:44:33.674512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.302 qpair failed and we were unable to recover it. 00:34:49.302 [2024-07-14 09:44:33.674700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.302 [2024-07-14 09:44:33.674727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.302 qpair failed and we were unable to recover it. 00:34:49.302 [2024-07-14 09:44:33.674914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.302 [2024-07-14 09:44:33.674941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.302 qpair failed and we were unable to recover it. 
00:34:49.302 [2024-07-14 09:44:33.675138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.302 [2024-07-14 09:44:33.675164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.302 qpair failed and we were unable to recover it. 00:34:49.302 [2024-07-14 09:44:33.675361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.302 [2024-07-14 09:44:33.675387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.302 qpair failed and we were unable to recover it. 00:34:49.302 [2024-07-14 09:44:33.675549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.302 [2024-07-14 09:44:33.675575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.302 qpair failed and we were unable to recover it. 00:34:49.302 [2024-07-14 09:44:33.675765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.302 [2024-07-14 09:44:33.675791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.302 qpair failed and we were unable to recover it. 00:34:49.302 [2024-07-14 09:44:33.675951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.302 [2024-07-14 09:44:33.675978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.302 qpair failed and we were unable to recover it. 00:34:49.302 [2024-07-14 09:44:33.676162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.302 [2024-07-14 09:44:33.676188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.302 qpair failed and we were unable to recover it. 00:34:49.302 [2024-07-14 09:44:33.676371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.302 [2024-07-14 09:44:33.676397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.302 qpair failed and we were unable to recover it. 00:34:49.302 [2024-07-14 09:44:33.676554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.302 [2024-07-14 09:44:33.676580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.302 qpair failed and we were unable to recover it. 00:34:49.302 [2024-07-14 09:44:33.676760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.303 [2024-07-14 09:44:33.676786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.303 qpair failed and we were unable to recover it. 00:34:49.303 [2024-07-14 09:44:33.676961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.303 [2024-07-14 09:44:33.676990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.303 qpair failed and we were unable to recover it. 
00:34:49.303 [2024-07-14 09:44:33.677156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.303 [2024-07-14 09:44:33.677183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.303 qpair failed and we were unable to recover it. 00:34:49.303 [2024-07-14 09:44:33.677366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.303 [2024-07-14 09:44:33.677392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.303 qpair failed and we were unable to recover it. 00:34:49.303 [2024-07-14 09:44:33.677574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.303 [2024-07-14 09:44:33.677600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.303 qpair failed and we were unable to recover it. 00:34:49.303 [2024-07-14 09:44:33.677761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.303 [2024-07-14 09:44:33.677789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.303 qpair failed and we were unable to recover it. 00:34:49.303 [2024-07-14 09:44:33.677986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.303 [2024-07-14 09:44:33.678014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.303 qpair failed and we were unable to recover it. 00:34:49.303 [2024-07-14 09:44:33.678232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.303 [2024-07-14 09:44:33.678259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.303 qpair failed and we were unable to recover it. 00:34:49.303 [2024-07-14 09:44:33.678481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.303 [2024-07-14 09:44:33.678507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.303 qpair failed and we were unable to recover it. 00:34:49.303 [2024-07-14 09:44:33.678697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.303 [2024-07-14 09:44:33.678723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.303 qpair failed and we were unable to recover it. 00:34:49.303 [2024-07-14 09:44:33.678914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.303 [2024-07-14 09:44:33.678941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.303 qpair failed and we were unable to recover it. 00:34:49.303 [2024-07-14 09:44:33.679136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.303 [2024-07-14 09:44:33.679162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.303 qpair failed and we were unable to recover it. 
00:34:49.303 [2024-07-14 09:44:33.679350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.303 [2024-07-14 09:44:33.679376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.303 qpair failed and we were unable to recover it. 00:34:49.303 [2024-07-14 09:44:33.679569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.303 [2024-07-14 09:44:33.679595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.303 qpair failed and we were unable to recover it. 00:34:49.303 [2024-07-14 09:44:33.679779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.303 [2024-07-14 09:44:33.679805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.303 qpair failed and we were unable to recover it. 00:34:49.303 [2024-07-14 09:44:33.679968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.303 [2024-07-14 09:44:33.679995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.303 qpair failed and we were unable to recover it. 00:34:49.303 [2024-07-14 09:44:33.680154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.303 [2024-07-14 09:44:33.680180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.303 qpair failed and we were unable to recover it. 00:34:49.303 [2024-07-14 09:44:33.680370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.303 [2024-07-14 09:44:33.680396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.303 qpair failed and we were unable to recover it. 00:34:49.303 [2024-07-14 09:44:33.680581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.303 [2024-07-14 09:44:33.680607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.303 qpair failed and we were unable to recover it. 00:34:49.303 [2024-07-14 09:44:33.680785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.303 [2024-07-14 09:44:33.680816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.303 qpair failed and we were unable to recover it. 00:34:49.303 [2024-07-14 09:44:33.681017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.303 [2024-07-14 09:44:33.681044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.303 qpair failed and we were unable to recover it. 00:34:49.303 [2024-07-14 09:44:33.681198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.303 [2024-07-14 09:44:33.681223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.303 qpair failed and we were unable to recover it. 
00:34:49.303 [2024-07-14 09:44:33.681445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.303 [2024-07-14 09:44:33.681471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.303 qpair failed and we were unable to recover it. 00:34:49.303 [2024-07-14 09:44:33.681670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.303 [2024-07-14 09:44:33.681697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.303 qpair failed and we were unable to recover it. 00:34:49.303 [2024-07-14 09:44:33.681883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.303 [2024-07-14 09:44:33.681910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.303 qpair failed and we were unable to recover it. 00:34:49.303 [2024-07-14 09:44:33.682133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.303 [2024-07-14 09:44:33.682159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.303 qpair failed and we were unable to recover it. 00:34:49.303 [2024-07-14 09:44:33.682344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.303 [2024-07-14 09:44:33.682369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.303 qpair failed and we were unable to recover it. 00:34:49.303 [2024-07-14 09:44:33.682555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.303 [2024-07-14 09:44:33.682581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.303 qpair failed and we were unable to recover it. 00:34:49.303 [2024-07-14 09:44:33.682770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.303 [2024-07-14 09:44:33.682797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.303 qpair failed and we were unable to recover it. 00:34:49.303 [2024-07-14 09:44:33.682984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.303 [2024-07-14 09:44:33.683012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.303 qpair failed and we were unable to recover it. 00:34:49.303 [2024-07-14 09:44:33.683167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.303 [2024-07-14 09:44:33.683194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.303 qpair failed and we were unable to recover it. 00:34:49.303 [2024-07-14 09:44:33.683349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.303 [2024-07-14 09:44:33.683374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.303 qpair failed and we were unable to recover it. 
00:34:49.303 [2024-07-14 09:44:33.683598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.303 [2024-07-14 09:44:33.683625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.303 qpair failed and we were unable to recover it. 00:34:49.303 [2024-07-14 09:44:33.683793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.303 [2024-07-14 09:44:33.683819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.303 qpair failed and we were unable to recover it. 00:34:49.303 [2024-07-14 09:44:33.684029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.303 [2024-07-14 09:44:33.684055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.303 qpair failed and we were unable to recover it. 00:34:49.303 [2024-07-14 09:44:33.684238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.303 [2024-07-14 09:44:33.684264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.303 qpair failed and we were unable to recover it. 00:34:49.303 [2024-07-14 09:44:33.684454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.303 [2024-07-14 09:44:33.684480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.304 qpair failed and we were unable to recover it. 00:34:49.304 [2024-07-14 09:44:33.684642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.304 [2024-07-14 09:44:33.684669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.304 qpair failed and we were unable to recover it. 00:34:49.304 [2024-07-14 09:44:33.684852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.304 [2024-07-14 09:44:33.684891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.304 qpair failed and we were unable to recover it. 00:34:49.304 [2024-07-14 09:44:33.685117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.304 [2024-07-14 09:44:33.685143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.304 qpair failed and we were unable to recover it. 00:34:49.304 [2024-07-14 09:44:33.685330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.304 [2024-07-14 09:44:33.685356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.304 qpair failed and we were unable to recover it. 00:34:49.304 [2024-07-14 09:44:33.685569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.304 [2024-07-14 09:44:33.685595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.304 qpair failed and we were unable to recover it. 
00:34:49.304 [2024-07-14 09:44:33.685779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.304 [2024-07-14 09:44:33.685804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.304 qpair failed and we were unable to recover it. 00:34:49.304 [2024-07-14 09:44:33.685998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.304 [2024-07-14 09:44:33.686025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.304 qpair failed and we were unable to recover it. 00:34:49.304 [2024-07-14 09:44:33.686217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.304 [2024-07-14 09:44:33.686243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.304 qpair failed and we were unable to recover it. 00:34:49.304 [2024-07-14 09:44:33.686403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.304 [2024-07-14 09:44:33.686429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.304 qpair failed and we were unable to recover it. 00:34:49.304 [2024-07-14 09:44:33.686595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.304 [2024-07-14 09:44:33.686623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.304 qpair failed and we were unable to recover it. 00:34:49.304 [2024-07-14 09:44:33.686792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.304 [2024-07-14 09:44:33.686820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.304 qpair failed and we were unable to recover it. 00:34:49.304 [2024-07-14 09:44:33.687063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.304 [2024-07-14 09:44:33.687090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.304 qpair failed and we were unable to recover it. 00:34:49.304 [2024-07-14 09:44:33.687273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.304 [2024-07-14 09:44:33.687299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.304 qpair failed and we were unable to recover it. 00:34:49.304 [2024-07-14 09:44:33.687482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.304 [2024-07-14 09:44:33.687508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.304 qpair failed and we were unable to recover it. 00:34:49.304 [2024-07-14 09:44:33.687699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.304 [2024-07-14 09:44:33.687726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.304 qpair failed and we were unable to recover it. 
00:34:49.304 [2024-07-14 09:44:33.687898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.304 [2024-07-14 09:44:33.687926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.304 qpair failed and we were unable to recover it. 00:34:49.304 [2024-07-14 09:44:33.688103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.304 [2024-07-14 09:44:33.688130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.304 qpair failed and we were unable to recover it. 00:34:49.304 [2024-07-14 09:44:33.688322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.304 [2024-07-14 09:44:33.688348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.304 qpair failed and we were unable to recover it. 00:34:49.304 [2024-07-14 09:44:33.688540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.304 [2024-07-14 09:44:33.688566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.304 qpair failed and we were unable to recover it. 00:34:49.304 [2024-07-14 09:44:33.688732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.304 [2024-07-14 09:44:33.688760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.304 qpair failed and we were unable to recover it. 00:34:49.304 [2024-07-14 09:44:33.688970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.304 [2024-07-14 09:44:33.688998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.304 qpair failed and we were unable to recover it. 00:34:49.304 [2024-07-14 09:44:33.689181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.304 [2024-07-14 09:44:33.689206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.304 qpair failed and we were unable to recover it. 00:34:49.304 [2024-07-14 09:44:33.689405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.304 [2024-07-14 09:44:33.689436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.304 qpair failed and we were unable to recover it. 00:34:49.304 [2024-07-14 09:44:33.689625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.304 [2024-07-14 09:44:33.689652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.304 qpair failed and we were unable to recover it. 00:34:49.304 [2024-07-14 09:44:33.689858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.304 [2024-07-14 09:44:33.689891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.304 qpair failed and we were unable to recover it. 
00:34:49.304 [2024-07-14 09:44:33.690048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.304 [2024-07-14 09:44:33.690075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.304 qpair failed and we were unable to recover it. 00:34:49.304 [2024-07-14 09:44:33.690233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.304 [2024-07-14 09:44:33.690260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.304 qpair failed and we were unable to recover it. 00:34:49.304 [2024-07-14 09:44:33.690446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.304 [2024-07-14 09:44:33.690473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.304 qpair failed and we were unable to recover it. 00:34:49.304 [2024-07-14 09:44:33.690681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.304 [2024-07-14 09:44:33.690707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.304 qpair failed and we were unable to recover it. 00:34:49.304 [2024-07-14 09:44:33.690900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.304 [2024-07-14 09:44:33.690929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.304 qpair failed and we were unable to recover it. 00:34:49.304 [2024-07-14 09:44:33.691136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.304 [2024-07-14 09:44:33.691163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.304 qpair failed and we were unable to recover it. 00:34:49.304 [2024-07-14 09:44:33.691353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.304 [2024-07-14 09:44:33.691380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.304 qpair failed and we were unable to recover it. 00:34:49.304 [2024-07-14 09:44:33.691549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.304 [2024-07-14 09:44:33.691576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.304 qpair failed and we were unable to recover it. 00:34:49.304 [2024-07-14 09:44:33.691771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.304 [2024-07-14 09:44:33.691798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.304 qpair failed and we were unable to recover it. 00:34:49.304 [2024-07-14 09:44:33.691989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.304 [2024-07-14 09:44:33.692016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.304 qpair failed and we were unable to recover it. 
00:34:49.304 [2024-07-14 09:44:33.692203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.304 [2024-07-14 09:44:33.692230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.304 qpair failed and we were unable to recover it. 00:34:49.304 [2024-07-14 09:44:33.692422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.304 [2024-07-14 09:44:33.692448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.304 qpair failed and we were unable to recover it. 00:34:49.304 [2024-07-14 09:44:33.692614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.304 [2024-07-14 09:44:33.692641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.304 qpair failed and we were unable to recover it. 00:34:49.304 [2024-07-14 09:44:33.692831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.304 [2024-07-14 09:44:33.692857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.304 qpair failed and we were unable to recover it. 00:34:49.304 [2024-07-14 09:44:33.693043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.304 [2024-07-14 09:44:33.693070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.304 qpair failed and we were unable to recover it. 00:34:49.304 [2024-07-14 09:44:33.693241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.304 [2024-07-14 09:44:33.693268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.305 qpair failed and we were unable to recover it. 00:34:49.305 [2024-07-14 09:44:33.693421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.305 [2024-07-14 09:44:33.693447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.305 qpair failed and we were unable to recover it. 00:34:49.305 [2024-07-14 09:44:33.693601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.305 [2024-07-14 09:44:33.693627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.305 qpair failed and we were unable to recover it. 00:34:49.305 [2024-07-14 09:44:33.693804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.305 [2024-07-14 09:44:33.693830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.305 qpair failed and we were unable to recover it. 00:34:49.305 [2024-07-14 09:44:33.694023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.305 [2024-07-14 09:44:33.694050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.305 qpair failed and we were unable to recover it. 
00:34:49.305 [2024-07-14 09:44:33.694246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.305 [2024-07-14 09:44:33.694273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.305 qpair failed and we were unable to recover it. 00:34:49.305 [2024-07-14 09:44:33.694437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.305 [2024-07-14 09:44:33.694464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.305 qpair failed and we were unable to recover it. 00:34:49.305 [2024-07-14 09:44:33.694625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.305 [2024-07-14 09:44:33.694652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.305 qpair failed and we were unable to recover it. 00:34:49.305 [2024-07-14 09:44:33.694842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.305 [2024-07-14 09:44:33.694876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.305 qpair failed and we were unable to recover it. 00:34:49.305 [2024-07-14 09:44:33.695115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.305 [2024-07-14 09:44:33.695152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.305 qpair failed and we were unable to recover it. 00:34:49.305 [2024-07-14 09:44:33.695346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.305 [2024-07-14 09:44:33.695374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.305 qpair failed and we were unable to recover it. 00:34:49.305 [2024-07-14 09:44:33.695537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.305 [2024-07-14 09:44:33.695564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.305 qpair failed and we were unable to recover it. 00:34:49.305 [2024-07-14 09:44:33.695729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.305 [2024-07-14 09:44:33.695755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.305 qpair failed and we were unable to recover it. 00:34:49.305 [2024-07-14 09:44:33.695954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.305 [2024-07-14 09:44:33.695989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.305 qpair failed and we were unable to recover it. 00:34:49.305 [2024-07-14 09:44:33.696157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.305 [2024-07-14 09:44:33.696183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.305 qpair failed and we were unable to recover it. 
00:34:49.305 [2024-07-14 09:44:33.696373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.305 [2024-07-14 09:44:33.696398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.305 qpair failed and we were unable to recover it. 00:34:49.305 [2024-07-14 09:44:33.696614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.305 [2024-07-14 09:44:33.696639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.305 qpair failed and we were unable to recover it. 00:34:49.305 [2024-07-14 09:44:33.696805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.305 [2024-07-14 09:44:33.696830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.305 qpair failed and we were unable to recover it. 00:34:49.305 [2024-07-14 09:44:33.697030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.305 [2024-07-14 09:44:33.697056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.305 qpair failed and we were unable to recover it. 00:34:49.305 [2024-07-14 09:44:33.697264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.305 [2024-07-14 09:44:33.697290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.305 qpair failed and we were unable to recover it. 00:34:49.305 [2024-07-14 09:44:33.697478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.305 [2024-07-14 09:44:33.697505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.305 qpair failed and we were unable to recover it. 00:34:49.305 [2024-07-14 09:44:33.697702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.305 [2024-07-14 09:44:33.697727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.305 qpair failed and we were unable to recover it. 00:34:49.305 [2024-07-14 09:44:33.697894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.305 [2024-07-14 09:44:33.697935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.305 qpair failed and we were unable to recover it. 00:34:49.305 [2024-07-14 09:44:33.698126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.305 [2024-07-14 09:44:33.698151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.305 qpair failed and we were unable to recover it. 00:34:49.305 [2024-07-14 09:44:33.698310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.305 [2024-07-14 09:44:33.698336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.305 qpair failed and we were unable to recover it. 
00:34:49.305 [2024-07-14 09:44:33.698548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.305 [2024-07-14 09:44:33.698574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.305 qpair failed and we were unable to recover it. 00:34:49.305 [2024-07-14 09:44:33.698740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.305 [2024-07-14 09:44:33.698766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.305 qpair failed and we were unable to recover it. 00:34:49.305 [2024-07-14 09:44:33.698940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.305 [2024-07-14 09:44:33.698966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.305 qpair failed and we were unable to recover it. 00:34:49.305 [2024-07-14 09:44:33.699163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.305 [2024-07-14 09:44:33.699189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.305 qpair failed and we were unable to recover it. 00:34:49.305 [2024-07-14 09:44:33.699350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.305 [2024-07-14 09:44:33.699375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.305 qpair failed and we were unable to recover it. 00:34:49.305 [2024-07-14 09:44:33.699534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.305 [2024-07-14 09:44:33.699560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.305 qpair failed and we were unable to recover it. 00:34:49.305 [2024-07-14 09:44:33.699774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.305 [2024-07-14 09:44:33.699800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.305 qpair failed and we were unable to recover it. 00:34:49.305 [2024-07-14 09:44:33.699985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.305 [2024-07-14 09:44:33.700012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.305 qpair failed and we were unable to recover it. 00:34:49.305 [2024-07-14 09:44:33.700169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.305 [2024-07-14 09:44:33.700195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.305 qpair failed and we were unable to recover it. 00:34:49.305 [2024-07-14 09:44:33.700385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.305 [2024-07-14 09:44:33.700410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.305 qpair failed and we were unable to recover it. 
00:34:49.305 [2024-07-14 09:44:33.700576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.305 [2024-07-14 09:44:33.700603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.305 qpair failed and we were unable to recover it. 00:34:49.305 [2024-07-14 09:44:33.700808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.305 [2024-07-14 09:44:33.700849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.305 qpair failed and we were unable to recover it. 00:34:49.305 [2024-07-14 09:44:33.701062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.305 [2024-07-14 09:44:33.701090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.305 qpair failed and we were unable to recover it. 00:34:49.305 [2024-07-14 09:44:33.701310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.305 [2024-07-14 09:44:33.701337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.305 qpair failed and we were unable to recover it. 00:34:49.305 [2024-07-14 09:44:33.701526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.305 [2024-07-14 09:44:33.701553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.305 qpair failed and we were unable to recover it. 00:34:49.305 [2024-07-14 09:44:33.701761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.305 [2024-07-14 09:44:33.701787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.305 qpair failed and we were unable to recover it. 00:34:49.305 [2024-07-14 09:44:33.701951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.306 [2024-07-14 09:44:33.701979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.306 qpair failed and we were unable to recover it. 00:34:49.306 [2024-07-14 09:44:33.702191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.306 [2024-07-14 09:44:33.702218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.306 qpair failed and we were unable to recover it. 00:34:49.306 [2024-07-14 09:44:33.702381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.306 [2024-07-14 09:44:33.702407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.306 qpair failed and we were unable to recover it. 00:34:49.306 [2024-07-14 09:44:33.702588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.306 [2024-07-14 09:44:33.702615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.306 qpair failed and we were unable to recover it. 
00:34:49.306 [2024-07-14 09:44:33.702789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.306 [2024-07-14 09:44:33.702815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.306 qpair failed and we were unable to recover it. 00:34:49.306 [2024-07-14 09:44:33.703014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.306 [2024-07-14 09:44:33.703041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.306 qpair failed and we were unable to recover it. 00:34:49.306 [2024-07-14 09:44:33.703226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.306 [2024-07-14 09:44:33.703253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.306 qpair failed and we were unable to recover it. 00:34:49.306 [2024-07-14 09:44:33.703446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.306 [2024-07-14 09:44:33.703474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.306 qpair failed and we were unable to recover it. 00:34:49.306 [2024-07-14 09:44:33.703669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.306 [2024-07-14 09:44:33.703695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.306 qpair failed and we were unable to recover it. 00:34:49.306 [2024-07-14 09:44:33.703880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.306 [2024-07-14 09:44:33.703914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.306 qpair failed and we were unable to recover it. 00:34:49.306 [2024-07-14 09:44:33.704098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.306 [2024-07-14 09:44:33.704124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.306 qpair failed and we were unable to recover it. 00:34:49.306 [2024-07-14 09:44:33.704280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.306 [2024-07-14 09:44:33.704306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.306 qpair failed and we were unable to recover it. 00:34:49.306 [2024-07-14 09:44:33.704496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.306 [2024-07-14 09:44:33.704521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.306 qpair failed and we were unable to recover it. 00:34:49.306 [2024-07-14 09:44:33.704674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.306 [2024-07-14 09:44:33.704700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.306 qpair failed and we were unable to recover it. 
00:34:49.306 [2024-07-14 09:44:33.704884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.306 [2024-07-14 09:44:33.704920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.306 qpair failed and we were unable to recover it. 00:34:49.306 [2024-07-14 09:44:33.705080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.306 [2024-07-14 09:44:33.705105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.306 qpair failed and we were unable to recover it. 00:34:49.306 [2024-07-14 09:44:33.705305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.306 [2024-07-14 09:44:33.705330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.306 qpair failed and we were unable to recover it. 00:34:49.306 [2024-07-14 09:44:33.705494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.306 [2024-07-14 09:44:33.705519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.306 qpair failed and we were unable to recover it. 00:34:49.306 [2024-07-14 09:44:33.705676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.306 [2024-07-14 09:44:33.705701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.306 qpair failed and we were unable to recover it. 00:34:49.306 [2024-07-14 09:44:33.705869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.306 [2024-07-14 09:44:33.705896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.306 qpair failed and we were unable to recover it. 00:34:49.306 [2024-07-14 09:44:33.706057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.306 [2024-07-14 09:44:33.706084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.306 qpair failed and we were unable to recover it. 00:34:49.306 [2024-07-14 09:44:33.706282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.306 [2024-07-14 09:44:33.706308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.306 qpair failed and we were unable to recover it. 00:34:49.306 [2024-07-14 09:44:33.706474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.306 [2024-07-14 09:44:33.706499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.306 qpair failed and we were unable to recover it. 00:34:49.306 [2024-07-14 09:44:33.706656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.306 [2024-07-14 09:44:33.706682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.306 qpair failed and we were unable to recover it. 
00:34:49.306 [2024-07-14 09:44:33.706882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.306 [2024-07-14 09:44:33.706918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.306 qpair failed and we were unable to recover it. 00:34:49.306 [2024-07-14 09:44:33.707078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.306 [2024-07-14 09:44:33.707104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.306 qpair failed and we were unable to recover it. 00:34:49.306 [2024-07-14 09:44:33.707291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.306 [2024-07-14 09:44:33.707317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.306 qpair failed and we were unable to recover it. 00:34:49.306 [2024-07-14 09:44:33.707495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.306 [2024-07-14 09:44:33.707521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.306 qpair failed and we were unable to recover it. 00:34:49.306 [2024-07-14 09:44:33.707703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.306 [2024-07-14 09:44:33.707729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.306 qpair failed and we were unable to recover it. 00:34:49.306 [2024-07-14 09:44:33.707899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.306 [2024-07-14 09:44:33.707927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.306 qpair failed and we were unable to recover it. 00:34:49.306 [2024-07-14 09:44:33.708117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.306 [2024-07-14 09:44:33.708142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.306 qpair failed and we were unable to recover it. 00:34:49.572 [2024-07-14 09:44:33.708302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.572 [2024-07-14 09:44:33.708328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.572 qpair failed and we were unable to recover it. 00:34:49.572 [2024-07-14 09:44:33.708485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.572 [2024-07-14 09:44:33.708511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.572 qpair failed and we were unable to recover it. 00:34:49.572 [2024-07-14 09:44:33.708670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.572 [2024-07-14 09:44:33.708696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.572 qpair failed and we were unable to recover it. 
00:34:49.572 [2024-07-14 09:44:33.708859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.572 [2024-07-14 09:44:33.708889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.572 qpair failed and we were unable to recover it. 00:34:49.572 [2024-07-14 09:44:33.709062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.572 [2024-07-14 09:44:33.709088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.572 qpair failed and we were unable to recover it. 00:34:49.572 [2024-07-14 09:44:33.709283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.572 [2024-07-14 09:44:33.709308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.572 qpair failed and we were unable to recover it. 00:34:49.572 [2024-07-14 09:44:33.709487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.572 [2024-07-14 09:44:33.709513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.572 qpair failed and we were unable to recover it. 00:34:49.572 [2024-07-14 09:44:33.709706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.572 [2024-07-14 09:44:33.709731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.572 qpair failed and we were unable to recover it. 00:34:49.572 [2024-07-14 09:44:33.709927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.572 [2024-07-14 09:44:33.709954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.572 qpair failed and we were unable to recover it. 00:34:49.572 [2024-07-14 09:44:33.710119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.572 [2024-07-14 09:44:33.710145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.572 qpair failed and we were unable to recover it. 00:34:49.572 [2024-07-14 09:44:33.710302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.572 [2024-07-14 09:44:33.710327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.572 qpair failed and we were unable to recover it. 00:34:49.572 [2024-07-14 09:44:33.710533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.573 [2024-07-14 09:44:33.710558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.573 qpair failed and we were unable to recover it. 00:34:49.573 [2024-07-14 09:44:33.710749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.573 [2024-07-14 09:44:33.710774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.573 qpair failed and we were unable to recover it. 
00:34:49.573 [2024-07-14 09:44:33.710962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.573 [2024-07-14 09:44:33.710991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.573 qpair failed and we were unable to recover it. 00:34:49.573 [2024-07-14 09:44:33.711159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.573 [2024-07-14 09:44:33.711184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.573 qpair failed and we were unable to recover it. 00:34:49.573 [2024-07-14 09:44:33.711368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.573 [2024-07-14 09:44:33.711393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.573 qpair failed and we were unable to recover it. 00:34:49.573 [2024-07-14 09:44:33.711550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.573 [2024-07-14 09:44:33.711575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.573 qpair failed and we were unable to recover it. 00:34:49.573 [2024-07-14 09:44:33.711779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.573 [2024-07-14 09:44:33.711809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.573 qpair failed and we were unable to recover it. 00:34:49.573 [2024-07-14 09:44:33.711966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.573 [2024-07-14 09:44:33.711994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.573 qpair failed and we were unable to recover it. 00:34:49.573 [2024-07-14 09:44:33.712188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.573 [2024-07-14 09:44:33.712214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.573 qpair failed and we were unable to recover it. 00:34:49.573 [2024-07-14 09:44:33.712394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.573 [2024-07-14 09:44:33.712421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.573 qpair failed and we were unable to recover it. 00:34:49.573 [2024-07-14 09:44:33.712587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.573 [2024-07-14 09:44:33.712612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.573 qpair failed and we were unable to recover it. 00:34:49.573 [2024-07-14 09:44:33.712829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.573 [2024-07-14 09:44:33.712854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.573 qpair failed and we were unable to recover it. 
00:34:49.573 [2024-07-14 09:44:33.713042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.573 [2024-07-14 09:44:33.713069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.573 qpair failed and we were unable to recover it. 00:34:49.573 [2024-07-14 09:44:33.713269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.573 [2024-07-14 09:44:33.713294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.573 qpair failed and we were unable to recover it. 00:34:49.573 [2024-07-14 09:44:33.713453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.573 [2024-07-14 09:44:33.713479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.573 qpair failed and we were unable to recover it. 00:34:49.573 [2024-07-14 09:44:33.713696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.573 [2024-07-14 09:44:33.713721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.573 qpair failed and we were unable to recover it. 00:34:49.573 [2024-07-14 09:44:33.713890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.573 [2024-07-14 09:44:33.713922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.573 qpair failed and we were unable to recover it. 00:34:49.573 [2024-07-14 09:44:33.714120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.573 [2024-07-14 09:44:33.714147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.573 qpair failed and we were unable to recover it. 00:34:49.573 [2024-07-14 09:44:33.714315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.573 [2024-07-14 09:44:33.714341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.573 qpair failed and we were unable to recover it. 00:34:49.573 [2024-07-14 09:44:33.714510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.573 [2024-07-14 09:44:33.714536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.573 qpair failed and we were unable to recover it. 00:34:49.573 [2024-07-14 09:44:33.714695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.573 [2024-07-14 09:44:33.714720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.573 qpair failed and we were unable to recover it. 00:34:49.573 [2024-07-14 09:44:33.714935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.573 [2024-07-14 09:44:33.714961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.573 qpair failed and we were unable to recover it. 
00:34:49.573 [2024-07-14 09:44:33.715155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.573 [2024-07-14 09:44:33.715181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.573 qpair failed and we were unable to recover it. 00:34:49.573 [2024-07-14 09:44:33.715372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.573 [2024-07-14 09:44:33.715398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.573 qpair failed and we were unable to recover it. 00:34:49.573 [2024-07-14 09:44:33.715563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.573 [2024-07-14 09:44:33.715590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.573 qpair failed and we were unable to recover it. 00:34:49.573 [2024-07-14 09:44:33.715766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.573 [2024-07-14 09:44:33.715792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.573 qpair failed and we were unable to recover it. 00:34:49.573 [2024-07-14 09:44:33.715958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.573 [2024-07-14 09:44:33.715986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.573 qpair failed and we were unable to recover it. 00:34:49.573 [2024-07-14 09:44:33.716170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.573 [2024-07-14 09:44:33.716196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.573 qpair failed and we were unable to recover it. 00:34:49.573 [2024-07-14 09:44:33.716410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.573 [2024-07-14 09:44:33.716436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.573 qpair failed and we were unable to recover it. 00:34:49.573 [2024-07-14 09:44:33.716604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.573 [2024-07-14 09:44:33.716631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.573 qpair failed and we were unable to recover it. 00:34:49.573 [2024-07-14 09:44:33.716788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.573 [2024-07-14 09:44:33.716814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.573 qpair failed and we were unable to recover it. 00:34:49.573 [2024-07-14 09:44:33.716983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.573 [2024-07-14 09:44:33.717015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.573 qpair failed and we were unable to recover it. 
00:34:49.573 [2024-07-14 09:44:33.717199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.573 [2024-07-14 09:44:33.717225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.573 qpair failed and we were unable to recover it. 00:34:49.573 [2024-07-14 09:44:33.717393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.573 [2024-07-14 09:44:33.717419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.573 qpair failed and we were unable to recover it. 00:34:49.573 [2024-07-14 09:44:33.717581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.573 [2024-07-14 09:44:33.717606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.573 qpair failed and we were unable to recover it. 00:34:49.573 [2024-07-14 09:44:33.717767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.573 [2024-07-14 09:44:33.717792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.573 qpair failed and we were unable to recover it. 00:34:49.573 [2024-07-14 09:44:33.717966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.573 [2024-07-14 09:44:33.717993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.573 qpair failed and we were unable to recover it. 00:34:49.573 [2024-07-14 09:44:33.718192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.573 [2024-07-14 09:44:33.718218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.573 qpair failed and we were unable to recover it. 00:34:49.573 [2024-07-14 09:44:33.718404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.573 [2024-07-14 09:44:33.718429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.573 qpair failed and we were unable to recover it. 00:34:49.573 [2024-07-14 09:44:33.718590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.573 [2024-07-14 09:44:33.718615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.573 qpair failed and we were unable to recover it. 00:34:49.573 [2024-07-14 09:44:33.718767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.574 [2024-07-14 09:44:33.718792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.574 qpair failed and we were unable to recover it. 00:34:49.574 [2024-07-14 09:44:33.718962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.574 [2024-07-14 09:44:33.718988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.574 qpair failed and we were unable to recover it. 
00:34:49.574 [2024-07-14 09:44:33.719156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.574 [2024-07-14 09:44:33.719182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.574 qpair failed and we were unable to recover it. 00:34:49.574 [2024-07-14 09:44:33.719365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.574 [2024-07-14 09:44:33.719391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.574 qpair failed and we were unable to recover it. 00:34:49.574 [2024-07-14 09:44:33.719588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.574 [2024-07-14 09:44:33.719613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.574 qpair failed and we were unable to recover it. 00:34:49.574 [2024-07-14 09:44:33.719780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.574 [2024-07-14 09:44:33.719805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.574 qpair failed and we were unable to recover it. 00:34:49.574 [2024-07-14 09:44:33.719979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.574 [2024-07-14 09:44:33.720024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.574 qpair failed and we were unable to recover it. 00:34:49.574 [2024-07-14 09:44:33.720221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.574 [2024-07-14 09:44:33.720249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.574 qpair failed and we were unable to recover it. 00:34:49.574 [2024-07-14 09:44:33.720413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.574 [2024-07-14 09:44:33.720440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.574 qpair failed and we were unable to recover it. 00:34:49.574 [2024-07-14 09:44:33.720646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.574 [2024-07-14 09:44:33.720672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.574 qpair failed and we were unable to recover it. 00:34:49.574 [2024-07-14 09:44:33.720884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.574 [2024-07-14 09:44:33.720916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.574 qpair failed and we were unable to recover it. 00:34:49.574 [2024-07-14 09:44:33.721106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.574 [2024-07-14 09:44:33.721133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.574 qpair failed and we were unable to recover it. 
00:34:49.574 [2024-07-14 09:44:33.721298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.574 [2024-07-14 09:44:33.721324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.574 qpair failed and we were unable to recover it. 00:34:49.574 [2024-07-14 09:44:33.721515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.574 [2024-07-14 09:44:33.721542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.574 qpair failed and we were unable to recover it. 00:34:49.574 [2024-07-14 09:44:33.721738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.574 [2024-07-14 09:44:33.721765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.574 qpair failed and we were unable to recover it. 00:34:49.574 [2024-07-14 09:44:33.721936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.574 [2024-07-14 09:44:33.721964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.574 qpair failed and we were unable to recover it. 00:34:49.574 [2024-07-14 09:44:33.722177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.574 [2024-07-14 09:44:33.722203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.574 qpair failed and we were unable to recover it. 00:34:49.574 [2024-07-14 09:44:33.722369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.574 [2024-07-14 09:44:33.722396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.574 qpair failed and we were unable to recover it. 00:34:49.574 [2024-07-14 09:44:33.722557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.574 [2024-07-14 09:44:33.722583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.574 qpair failed and we were unable to recover it. 00:34:49.574 [2024-07-14 09:44:33.722752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.574 [2024-07-14 09:44:33.722778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.574 qpair failed and we were unable to recover it. 00:34:49.574 [2024-07-14 09:44:33.722958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.574 [2024-07-14 09:44:33.722984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.574 qpair failed and we were unable to recover it. 00:34:49.574 [2024-07-14 09:44:33.723141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.574 [2024-07-14 09:44:33.723167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.574 qpair failed and we were unable to recover it. 
00:34:49.574 [2024-07-14 09:44:33.723347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.574 [2024-07-14 09:44:33.723373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.574 qpair failed and we were unable to recover it. 00:34:49.574 [2024-07-14 09:44:33.723525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.574 [2024-07-14 09:44:33.723551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.574 qpair failed and we were unable to recover it. 00:34:49.574 [2024-07-14 09:44:33.723710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.574 [2024-07-14 09:44:33.723738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.574 qpair failed and we were unable to recover it. 00:34:49.574 [2024-07-14 09:44:33.723920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.574 [2024-07-14 09:44:33.723947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.574 qpair failed and we were unable to recover it. 00:34:49.574 [2024-07-14 09:44:33.724115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.574 [2024-07-14 09:44:33.724142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.574 qpair failed and we were unable to recover it. 00:34:49.574 09:44:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:49.574 [2024-07-14 09:44:33.724342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.574 [2024-07-14 09:44:33.724370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.574 qpair failed and we were unable to recover it. 00:34:49.574 09:44:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:34:49.574 [2024-07-14 09:44:33.724579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.574 [2024-07-14 09:44:33.724606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.574 qpair failed and we were unable to recover it. 00:34:49.574 [2024-07-14 09:44:33.724776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.574 [2024-07-14 09:44:33.724804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.574 qpair failed and we were unable to recover it. 00:34:49.574 09:44:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:49.574 [2024-07-14 09:44:33.725002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.574 [2024-07-14 09:44:33.725031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.574 qpair failed and we were unable to recover it. 
00:34:49.574 09:44:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:49.574 [2024-07-14 09:44:33.725217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.574 [2024-07-14 09:44:33.725250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.574 qpair failed and we were unable to recover it. 00:34:49.574 09:44:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:49.574 [2024-07-14 09:44:33.725459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.574 [2024-07-14 09:44:33.725488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.574 qpair failed and we were unable to recover it. 00:34:49.574 [2024-07-14 09:44:33.725696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.574 [2024-07-14 09:44:33.725722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.574 qpair failed and we were unable to recover it. 00:34:49.574 [2024-07-14 09:44:33.725904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.574 [2024-07-14 09:44:33.725930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.574 qpair failed and we were unable to recover it. 00:34:49.574 [2024-07-14 09:44:33.726098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.574 [2024-07-14 09:44:33.726126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.574 qpair failed and we were unable to recover it. 00:34:49.574 [2024-07-14 09:44:33.726338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.574 [2024-07-14 09:44:33.726364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.574 qpair failed and we were unable to recover it. 00:34:49.574 [2024-07-14 09:44:33.726546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.574 [2024-07-14 09:44:33.726573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.574 qpair failed and we were unable to recover it. 00:34:49.574 [2024-07-14 09:44:33.726762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.574 [2024-07-14 09:44:33.726789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.574 qpair failed and we were unable to recover it. 00:34:49.575 [2024-07-14 09:44:33.726960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.575 [2024-07-14 09:44:33.726988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.575 qpair failed and we were unable to recover it. 
00:34:49.575 [2024-07-14 09:44:33.727176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.575 [2024-07-14 09:44:33.727203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.575 qpair failed and we were unable to recover it. 00:34:49.575 [2024-07-14 09:44:33.727388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.575 [2024-07-14 09:44:33.727415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.575 qpair failed and we were unable to recover it. 00:34:49.575 [2024-07-14 09:44:33.727601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.575 [2024-07-14 09:44:33.727627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.575 qpair failed and we were unable to recover it. 00:34:49.575 [2024-07-14 09:44:33.727812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.575 [2024-07-14 09:44:33.727839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.575 qpair failed and we were unable to recover it. 00:34:49.575 [2024-07-14 09:44:33.728031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.575 [2024-07-14 09:44:33.728063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.575 qpair failed and we were unable to recover it. 00:34:49.575 [2024-07-14 09:44:33.728223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.575 [2024-07-14 09:44:33.728252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.575 qpair failed and we were unable to recover it. 00:34:49.575 [2024-07-14 09:44:33.728437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.575 [2024-07-14 09:44:33.728465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.575 qpair failed and we were unable to recover it. 00:34:49.575 [2024-07-14 09:44:33.728618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.575 [2024-07-14 09:44:33.728645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.575 qpair failed and we were unable to recover it. 00:34:49.575 [2024-07-14 09:44:33.728849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.575 [2024-07-14 09:44:33.728882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.575 qpair failed and we were unable to recover it. 00:34:49.575 [2024-07-14 09:44:33.729075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.575 [2024-07-14 09:44:33.729101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.575 qpair failed and we were unable to recover it. 
00:34:49.575 [2024-07-14 09:44:33.729319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.575 [2024-07-14 09:44:33.729345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.575 qpair failed and we were unable to recover it. 00:34:49.575 [2024-07-14 09:44:33.729503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.575 [2024-07-14 09:44:33.729530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.575 qpair failed and we were unable to recover it. 00:34:49.575 [2024-07-14 09:44:33.729689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.575 [2024-07-14 09:44:33.729715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.575 qpair failed and we were unable to recover it. 00:34:49.575 [2024-07-14 09:44:33.729883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.575 [2024-07-14 09:44:33.729910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.575 qpair failed and we were unable to recover it. 00:34:49.575 [2024-07-14 09:44:33.730105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.575 [2024-07-14 09:44:33.730131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.575 qpair failed and we were unable to recover it. 00:34:49.575 [2024-07-14 09:44:33.730335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.575 [2024-07-14 09:44:33.730361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.575 qpair failed and we were unable to recover it. 00:34:49.575 [2024-07-14 09:44:33.730543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.575 [2024-07-14 09:44:33.730568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.575 qpair failed and we were unable to recover it. 00:34:49.575 [2024-07-14 09:44:33.730773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.575 [2024-07-14 09:44:33.730813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.575 qpair failed and we were unable to recover it. 00:34:49.575 [2024-07-14 09:44:33.731034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.575 [2024-07-14 09:44:33.731063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.575 qpair failed and we were unable to recover it. 00:34:49.575 [2024-07-14 09:44:33.731257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.575 [2024-07-14 09:44:33.731285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.575 qpair failed and we were unable to recover it. 
00:34:49.575 [2024-07-14 09:44:33.731451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.575 [2024-07-14 09:44:33.731478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.575 qpair failed and we were unable to recover it. 00:34:49.575 [2024-07-14 09:44:33.731696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.575 [2024-07-14 09:44:33.731723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.575 qpair failed and we were unable to recover it. 00:34:49.575 [2024-07-14 09:44:33.731888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.575 [2024-07-14 09:44:33.731923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.575 qpair failed and we were unable to recover it. 00:34:49.575 [2024-07-14 09:44:33.732149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.575 [2024-07-14 09:44:33.732176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.575 qpair failed and we were unable to recover it. 00:34:49.575 [2024-07-14 09:44:33.732361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.575 [2024-07-14 09:44:33.732389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.575 qpair failed and we were unable to recover it. 00:34:49.575 [2024-07-14 09:44:33.732584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.575 [2024-07-14 09:44:33.732611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.575 qpair failed and we were unable to recover it. 00:34:49.575 [2024-07-14 09:44:33.732768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.575 [2024-07-14 09:44:33.732795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.575 qpair failed and we were unable to recover it. 00:34:49.575 [2024-07-14 09:44:33.732962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.575 [2024-07-14 09:44:33.732990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.575 qpair failed and we were unable to recover it. 00:34:49.575 [2024-07-14 09:44:33.733196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.575 [2024-07-14 09:44:33.733222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.575 qpair failed and we were unable to recover it. 00:34:49.575 [2024-07-14 09:44:33.733412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.575 [2024-07-14 09:44:33.733439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.575 qpair failed and we were unable to recover it. 
00:34:49.575 [2024-07-14 09:44:33.733657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.575 [2024-07-14 09:44:33.733684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.575 qpair failed and we were unable to recover it. 00:34:49.575 [2024-07-14 09:44:33.733900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.575 [2024-07-14 09:44:33.733936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.575 qpair failed and we were unable to recover it. 00:34:49.575 [2024-07-14 09:44:33.734130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.575 [2024-07-14 09:44:33.734159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.575 qpair failed and we were unable to recover it. 00:34:49.575 [2024-07-14 09:44:33.734326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.575 [2024-07-14 09:44:33.734354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.575 qpair failed and we were unable to recover it. 00:34:49.575 [2024-07-14 09:44:33.734538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.575 [2024-07-14 09:44:33.734563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.575 qpair failed and we were unable to recover it. 00:34:49.575 [2024-07-14 09:44:33.734732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.575 [2024-07-14 09:44:33.734758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.575 qpair failed and we were unable to recover it. 00:34:49.575 [2024-07-14 09:44:33.734937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.575 [2024-07-14 09:44:33.734965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.575 qpair failed and we were unable to recover it. 00:34:49.575 [2024-07-14 09:44:33.735125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.575 [2024-07-14 09:44:33.735153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.575 qpair failed and we were unable to recover it. 00:34:49.575 [2024-07-14 09:44:33.735337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.575 [2024-07-14 09:44:33.735364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.575 qpair failed and we were unable to recover it. 00:34:49.575 [2024-07-14 09:44:33.735528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.575 [2024-07-14 09:44:33.735554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.575 qpair failed and we were unable to recover it. 
00:34:49.576 [2024-07-14 09:44:33.735713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.576 [2024-07-14 09:44:33.735739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.576 qpair failed and we were unable to recover it. 00:34:49.576 [2024-07-14 09:44:33.735903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.576 [2024-07-14 09:44:33.735931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.576 qpair failed and we were unable to recover it. 00:34:49.576 [2024-07-14 09:44:33.736085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.576 [2024-07-14 09:44:33.736110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.576 qpair failed and we were unable to recover it. 00:34:49.576 [2024-07-14 09:44:33.736318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.576 [2024-07-14 09:44:33.736344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.576 qpair failed and we were unable to recover it. 00:34:49.576 [2024-07-14 09:44:33.736559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.576 [2024-07-14 09:44:33.736590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.576 qpair failed and we were unable to recover it. 00:34:49.576 [2024-07-14 09:44:33.736780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.576 [2024-07-14 09:44:33.736806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.576 qpair failed and we were unable to recover it. 00:34:49.576 [2024-07-14 09:44:33.736975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.576 [2024-07-14 09:44:33.737001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.576 qpair failed and we were unable to recover it. 00:34:49.576 [2024-07-14 09:44:33.737194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.576 [2024-07-14 09:44:33.737220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.576 qpair failed and we were unable to recover it. 00:34:49.576 [2024-07-14 09:44:33.737409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.576 [2024-07-14 09:44:33.737435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.576 qpair failed and we were unable to recover it. 00:34:49.576 [2024-07-14 09:44:33.737607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.576 [2024-07-14 09:44:33.737634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.576 qpair failed and we were unable to recover it. 
00:34:49.576 [2024-07-14 09:44:33.737827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.576 [2024-07-14 09:44:33.737852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.576 qpair failed and we were unable to recover it. 00:34:49.576 [2024-07-14 09:44:33.738025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.576 [2024-07-14 09:44:33.738051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.576 qpair failed and we were unable to recover it. 00:34:49.576 [2024-07-14 09:44:33.738218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.576 [2024-07-14 09:44:33.738245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.576 qpair failed and we were unable to recover it. 00:34:49.576 [2024-07-14 09:44:33.738432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.576 [2024-07-14 09:44:33.738459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.576 qpair failed and we were unable to recover it. 00:34:49.576 [2024-07-14 09:44:33.738640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.576 [2024-07-14 09:44:33.738666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.576 qpair failed and we were unable to recover it. 00:34:49.576 [2024-07-14 09:44:33.738831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.576 [2024-07-14 09:44:33.738857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.576 qpair failed and we were unable to recover it. 00:34:49.576 [2024-07-14 09:44:33.739110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.576 [2024-07-14 09:44:33.739136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.576 qpair failed and we were unable to recover it. 00:34:49.576 [2024-07-14 09:44:33.739304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.576 [2024-07-14 09:44:33.739330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.576 qpair failed and we were unable to recover it. 00:34:49.576 [2024-07-14 09:44:33.739490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.576 [2024-07-14 09:44:33.739516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.576 qpair failed and we were unable to recover it. 00:34:49.576 [2024-07-14 09:44:33.739686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.576 [2024-07-14 09:44:33.739712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.576 qpair failed and we were unable to recover it. 
00:34:49.576 [2024-07-14 09:44:33.739876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.576 [2024-07-14 09:44:33.739903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.576 qpair failed and we were unable to recover it. 00:34:49.576 [2024-07-14 09:44:33.740088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.576 [2024-07-14 09:44:33.740115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.576 qpair failed and we were unable to recover it. 00:34:49.576 [2024-07-14 09:44:33.740271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.576 [2024-07-14 09:44:33.740297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.576 qpair failed and we were unable to recover it. 00:34:49.576 [2024-07-14 09:44:33.740486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.576 [2024-07-14 09:44:33.740512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.576 qpair failed and we were unable to recover it. 00:34:49.576 [2024-07-14 09:44:33.740670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.576 [2024-07-14 09:44:33.740696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.576 qpair failed and we were unable to recover it. 00:34:49.576 [2024-07-14 09:44:33.740883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.576 [2024-07-14 09:44:33.740910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.576 qpair failed and we were unable to recover it. 00:34:49.576 [2024-07-14 09:44:33.741093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.576 [2024-07-14 09:44:33.741118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.576 qpair failed and we were unable to recover it. 00:34:49.576 [2024-07-14 09:44:33.741309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.576 [2024-07-14 09:44:33.741334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.576 qpair failed and we were unable to recover it. 00:34:49.576 [2024-07-14 09:44:33.741525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.576 [2024-07-14 09:44:33.741551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.576 qpair failed and we were unable to recover it. 00:34:49.576 [2024-07-14 09:44:33.741737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.576 [2024-07-14 09:44:33.741763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.576 qpair failed and we were unable to recover it. 
00:34:49.576 [2024-07-14 09:44:33.741945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.576 [2024-07-14 09:44:33.741971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.576 qpair failed and we were unable to recover it. 00:34:49.576 [2024-07-14 09:44:33.742183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.576 [2024-07-14 09:44:33.742223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.576 qpair failed and we were unable to recover it. 00:34:49.576 [2024-07-14 09:44:33.742413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.576 [2024-07-14 09:44:33.742441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.576 qpair failed and we were unable to recover it. 00:34:49.576 [2024-07-14 09:44:33.742629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.576 [2024-07-14 09:44:33.742656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.576 qpair failed and we were unable to recover it. 00:34:49.576 [2024-07-14 09:44:33.742843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.576 [2024-07-14 09:44:33.742876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.576 qpair failed and we were unable to recover it. 00:34:49.576 [2024-07-14 09:44:33.743036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.576 [2024-07-14 09:44:33.743062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.576 qpair failed and we were unable to recover it. 00:34:49.576 [2024-07-14 09:44:33.743237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.577 [2024-07-14 09:44:33.743264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.577 qpair failed and we were unable to recover it. 00:34:49.577 [2024-07-14 09:44:33.743432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.577 [2024-07-14 09:44:33.743460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.577 qpair failed and we were unable to recover it. 00:34:49.577 [2024-07-14 09:44:33.743619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.577 [2024-07-14 09:44:33.743646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.577 qpair failed and we were unable to recover it. 00:34:49.577 [2024-07-14 09:44:33.743833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.577 [2024-07-14 09:44:33.743860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.577 qpair failed and we were unable to recover it. 
00:34:49.577 [2024-07-14 09:44:33.744044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.577 [2024-07-14 09:44:33.744070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.577 qpair failed and we were unable to recover it. 00:34:49.577 [2024-07-14 09:44:33.744256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.577 [2024-07-14 09:44:33.744283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.577 qpair failed and we were unable to recover it. 00:34:49.577 [2024-07-14 09:44:33.744470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.577 [2024-07-14 09:44:33.744498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.577 qpair failed and we were unable to recover it. 00:34:49.577 [2024-07-14 09:44:33.744662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.577 [2024-07-14 09:44:33.744689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.577 qpair failed and we were unable to recover it. 00:34:49.577 [2024-07-14 09:44:33.744879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.577 [2024-07-14 09:44:33.744912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.577 qpair failed and we were unable to recover it. 00:34:49.577 [2024-07-14 09:44:33.745125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.577 [2024-07-14 09:44:33.745151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.577 qpair failed and we were unable to recover it. 00:34:49.577 [2024-07-14 09:44:33.745306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.577 [2024-07-14 09:44:33.745333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.577 qpair failed and we were unable to recover it. 00:34:49.577 [2024-07-14 09:44:33.745525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.577 [2024-07-14 09:44:33.745552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.577 qpair failed and we were unable to recover it. 00:34:49.577 [2024-07-14 09:44:33.745762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.577 [2024-07-14 09:44:33.745789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1660000b90 with addr=10.0.0.2, port=4420 00:34:49.577 qpair failed and we were unable to recover it. 00:34:49.577 [2024-07-14 09:44:33.745950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.577 [2024-07-14 09:44:33.745978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.577 qpair failed and we were unable to recover it. 
00:34:49.577 [2024-07-14 09:44:33.746163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.577 [2024-07-14 09:44:33.746189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.577 qpair failed and we were unable to recover it. 00:34:49.577 [2024-07-14 09:44:33.746387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.577 [2024-07-14 09:44:33.746413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.577 qpair failed and we were unable to recover it. 00:34:49.577 [2024-07-14 09:44:33.746606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.577 [2024-07-14 09:44:33.746632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.577 qpair failed and we were unable to recover it. 00:34:49.577 [2024-07-14 09:44:33.746843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.577 [2024-07-14 09:44:33.746875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.577 qpair failed and we were unable to recover it. 00:34:49.577 [2024-07-14 09:44:33.747061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.577 [2024-07-14 09:44:33.747088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.577 qpair failed and we were unable to recover it. 00:34:49.577 [2024-07-14 09:44:33.747281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.577 [2024-07-14 09:44:33.747307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.577 qpair failed and we were unable to recover it. 00:34:49.577 [2024-07-14 09:44:33.747458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.577 [2024-07-14 09:44:33.747483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.577 qpair failed and we were unable to recover it. 00:34:49.577 [2024-07-14 09:44:33.747648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.577 [2024-07-14 09:44:33.747673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.577 qpair failed and we were unable to recover it. 00:34:49.577 [2024-07-14 09:44:33.747846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.577 [2024-07-14 09:44:33.747883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.577 qpair failed and we were unable to recover it. 00:34:49.577 [2024-07-14 09:44:33.748071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.577 [2024-07-14 09:44:33.748098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.577 qpair failed and we were unable to recover it. 
00:34:49.577 [2024-07-14 09:44:33.748281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.577 [2024-07-14 09:44:33.748307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.577 qpair failed and we were unable to recover it. 00:34:49.577 [2024-07-14 09:44:33.748476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.577 [2024-07-14 09:44:33.748504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.577 qpair failed and we were unable to recover it. 00:34:49.577 [2024-07-14 09:44:33.748691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.577 [2024-07-14 09:44:33.748717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.577 qpair failed and we were unable to recover it. 00:34:49.577 [2024-07-14 09:44:33.748915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.577 [2024-07-14 09:44:33.748943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.577 qpair failed and we were unable to recover it. 00:34:49.577 [2024-07-14 09:44:33.749135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.577 [2024-07-14 09:44:33.749161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.577 qpair failed and we were unable to recover it. 00:34:49.577 [2024-07-14 09:44:33.749340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.577 [2024-07-14 09:44:33.749366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.577 qpair failed and we were unable to recover it. 00:34:49.577 09:44:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:49.577 [2024-07-14 09:44:33.749570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.577 [2024-07-14 09:44:33.749598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.577 qpair failed and we were unable to recover it. 00:34:49.577 [2024-07-14 09:44:33.749758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.577 [2024-07-14 09:44:33.749783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.577 qpair failed and we were unable to recover it. 00:34:49.577 09:44:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:49.577 [2024-07-14 09:44:33.749974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.577 [2024-07-14 09:44:33.750000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.577 qpair failed and we were unable to recover it. 
00:34:49.577 09:44:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.577 [2024-07-14 09:44:33.750217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.577 [2024-07-14 09:44:33.750245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.577 qpair failed and we were unable to recover it. 00:34:49.577 09:44:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:49.577 [2024-07-14 09:44:33.750439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.577 [2024-07-14 09:44:33.750466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.577 qpair failed and we were unable to recover it. 00:34:49.577 [2024-07-14 09:44:33.750625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.577 [2024-07-14 09:44:33.750651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.577 qpair failed and we were unable to recover it. 00:34:49.577 [2024-07-14 09:44:33.750816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.577 [2024-07-14 09:44:33.750842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.577 qpair failed and we were unable to recover it. 00:34:49.577 [2024-07-14 09:44:33.751017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.578 [2024-07-14 09:44:33.751043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.578 qpair failed and we were unable to recover it. 00:34:49.578 [2024-07-14 09:44:33.751228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.578 [2024-07-14 09:44:33.751254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.578 qpair failed and we were unable to recover it. 00:34:49.578 [2024-07-14 09:44:33.751444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.578 [2024-07-14 09:44:33.751470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.578 qpair failed and we were unable to recover it. 00:34:49.578 [2024-07-14 09:44:33.751681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.578 [2024-07-14 09:44:33.751706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.578 qpair failed and we were unable to recover it. 00:34:49.578 [2024-07-14 09:44:33.751899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.578 [2024-07-14 09:44:33.751936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.578 qpair failed and we were unable to recover it. 
00:34:49.578 [2024-07-14 09:44:33.752118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.578 [2024-07-14 09:44:33.752144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.578 qpair failed and we were unable to recover it. 00:34:49.578 [2024-07-14 09:44:33.752332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.578 [2024-07-14 09:44:33.752358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.578 qpair failed and we were unable to recover it. 00:34:49.578 [2024-07-14 09:44:33.752516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.578 [2024-07-14 09:44:33.752544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.578 qpair failed and we were unable to recover it. 00:34:49.578 [2024-07-14 09:44:33.752728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.578 [2024-07-14 09:44:33.752753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.578 qpair failed and we were unable to recover it. 00:34:49.578 [2024-07-14 09:44:33.752965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.578 [2024-07-14 09:44:33.752992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.578 qpair failed and we were unable to recover it. 00:34:49.578 [2024-07-14 09:44:33.753158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.578 [2024-07-14 09:44:33.753184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.578 qpair failed and we were unable to recover it. 00:34:49.578 [2024-07-14 09:44:33.753399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.578 [2024-07-14 09:44:33.753424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.578 qpair failed and we were unable to recover it. 00:34:49.578 [2024-07-14 09:44:33.753589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.578 [2024-07-14 09:44:33.753616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.578 qpair failed and we were unable to recover it. 00:34:49.578 [2024-07-14 09:44:33.753910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.578 [2024-07-14 09:44:33.753947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.578 qpair failed and we were unable to recover it. 00:34:49.578 [2024-07-14 09:44:33.754137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.578 [2024-07-14 09:44:33.754163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.578 qpair failed and we were unable to recover it. 
00:34:49.578 [2024-07-14 09:44:33.754334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.578 [2024-07-14 09:44:33.754360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.578 qpair failed and we were unable to recover it. 00:34:49.578 [2024-07-14 09:44:33.754516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.578 [2024-07-14 09:44:33.754541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.578 qpair failed and we were unable to recover it. 00:34:49.578 [2024-07-14 09:44:33.754728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.578 [2024-07-14 09:44:33.754753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.578 qpair failed and we were unable to recover it. 00:34:49.578 [2024-07-14 09:44:33.754933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.578 [2024-07-14 09:44:33.754960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.578 qpair failed and we were unable to recover it. 00:34:49.578 [2024-07-14 09:44:33.755137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.578 [2024-07-14 09:44:33.755163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.578 qpair failed and we were unable to recover it. 00:34:49.578 [2024-07-14 09:44:33.755314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.578 [2024-07-14 09:44:33.755339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.578 qpair failed and we were unable to recover it. 00:34:49.578 [2024-07-14 09:44:33.755496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.578 [2024-07-14 09:44:33.755521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.578 qpair failed and we were unable to recover it. 00:34:49.578 [2024-07-14 09:44:33.755737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.578 [2024-07-14 09:44:33.755763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.578 qpair failed and we were unable to recover it. 00:34:49.578 [2024-07-14 09:44:33.755943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.578 [2024-07-14 09:44:33.755969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.578 qpair failed and we were unable to recover it. 00:34:49.578 [2024-07-14 09:44:33.756164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.578 [2024-07-14 09:44:33.756190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.578 qpair failed and we were unable to recover it. 
00:34:49.578 [2024-07-14 09:44:33.756362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.578 [2024-07-14 09:44:33.756388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.578 qpair failed and we were unable to recover it. 00:34:49.578 [2024-07-14 09:44:33.756716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.578 [2024-07-14 09:44:33.756742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.578 qpair failed and we were unable to recover it. 00:34:49.578 [2024-07-14 09:44:33.756961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.578 [2024-07-14 09:44:33.756987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.578 qpair failed and we were unable to recover it. 00:34:49.578 [2024-07-14 09:44:33.757173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.578 [2024-07-14 09:44:33.757198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.578 qpair failed and we were unable to recover it. 00:34:49.578 [2024-07-14 09:44:33.757355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.578 [2024-07-14 09:44:33.757380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.578 qpair failed and we were unable to recover it. 00:34:49.578 [2024-07-14 09:44:33.757563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.578 [2024-07-14 09:44:33.757589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.578 qpair failed and we were unable to recover it. 00:34:49.578 [2024-07-14 09:44:33.757769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.578 [2024-07-14 09:44:33.757795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.578 qpair failed and we were unable to recover it. 00:34:49.578 [2024-07-14 09:44:33.757976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.578 [2024-07-14 09:44:33.758002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.578 qpair failed and we were unable to recover it. 00:34:49.578 [2024-07-14 09:44:33.758169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.578 [2024-07-14 09:44:33.758196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.578 qpair failed and we were unable to recover it. 00:34:49.578 [2024-07-14 09:44:33.758379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.578 [2024-07-14 09:44:33.758405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.578 qpair failed and we were unable to recover it. 
00:34:49.578 [2024-07-14 09:44:33.758670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.578 [2024-07-14 09:44:33.758695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.578 qpair failed and we were unable to recover it. 00:34:49.578 [2024-07-14 09:44:33.758884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.578 [2024-07-14 09:44:33.758915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.578 qpair failed and we were unable to recover it. 00:34:49.578 [2024-07-14 09:44:33.759136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.578 [2024-07-14 09:44:33.759162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.578 qpair failed and we were unable to recover it. 00:34:49.578 [2024-07-14 09:44:33.759329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.578 [2024-07-14 09:44:33.759355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.578 qpair failed and we were unable to recover it. 00:34:49.578 [2024-07-14 09:44:33.759688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.578 [2024-07-14 09:44:33.759713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.578 qpair failed and we were unable to recover it. 00:34:49.578 [2024-07-14 09:44:33.759922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.578 [2024-07-14 09:44:33.759948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.578 qpair failed and we were unable to recover it. 00:34:49.578 [2024-07-14 09:44:33.760127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.579 [2024-07-14 09:44:33.760154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.579 qpair failed and we were unable to recover it. 00:34:49.579 [2024-07-14 09:44:33.760341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.579 [2024-07-14 09:44:33.760367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.579 qpair failed and we were unable to recover it. 00:34:49.579 [2024-07-14 09:44:33.760522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.579 [2024-07-14 09:44:33.760548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.579 qpair failed and we were unable to recover it. 00:34:49.579 [2024-07-14 09:44:33.760744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.579 [2024-07-14 09:44:33.760770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.579 qpair failed and we were unable to recover it. 
00:34:49.579 [2024-07-14 09:44:33.760964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.579 [2024-07-14 09:44:33.760990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.579 qpair failed and we were unable to recover it. 00:34:49.579 [2024-07-14 09:44:33.761191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.579 [2024-07-14 09:44:33.761217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.579 qpair failed and we were unable to recover it. 00:34:49.579 [2024-07-14 09:44:33.761405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.579 [2024-07-14 09:44:33.761431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.579 qpair failed and we were unable to recover it. 00:34:49.579 [2024-07-14 09:44:33.761628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.579 [2024-07-14 09:44:33.761654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.579 qpair failed and we were unable to recover it. 00:34:49.579 [2024-07-14 09:44:33.761842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.579 [2024-07-14 09:44:33.761874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.579 qpair failed and we were unable to recover it. 00:34:49.579 [2024-07-14 09:44:33.762052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.579 [2024-07-14 09:44:33.762078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.579 qpair failed and we were unable to recover it. 00:34:49.579 [2024-07-14 09:44:33.762252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.579 [2024-07-14 09:44:33.762277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.579 qpair failed and we were unable to recover it. 00:34:49.579 [2024-07-14 09:44:33.762442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.579 [2024-07-14 09:44:33.762468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.579 qpair failed and we were unable to recover it. 00:34:49.579 [2024-07-14 09:44:33.762627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.579 [2024-07-14 09:44:33.762654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.579 qpair failed and we were unable to recover it. 00:34:49.579 [2024-07-14 09:44:33.762839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.579 [2024-07-14 09:44:33.762872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.579 qpair failed and we were unable to recover it. 
00:34:49.579 [2024-07-14 09:44:33.763065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.579 [2024-07-14 09:44:33.763091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.579 qpair failed and we were unable to recover it. 00:34:49.579 [2024-07-14 09:44:33.763280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.579 [2024-07-14 09:44:33.763306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.579 qpair failed and we were unable to recover it. 00:34:49.579 [2024-07-14 09:44:33.763617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.579 [2024-07-14 09:44:33.763657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.579 qpair failed and we were unable to recover it. 00:34:49.579 [2024-07-14 09:44:33.763857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.579 [2024-07-14 09:44:33.763889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.579 qpair failed and we were unable to recover it. 00:34:49.579 [2024-07-14 09:44:33.764061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.579 [2024-07-14 09:44:33.764087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.579 qpair failed and we were unable to recover it. 00:34:49.579 [2024-07-14 09:44:33.764280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.579 [2024-07-14 09:44:33.764308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.579 qpair failed and we were unable to recover it. 00:34:49.579 [2024-07-14 09:44:33.764473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.579 [2024-07-14 09:44:33.764498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.579 qpair failed and we were unable to recover it. 00:34:49.579 [2024-07-14 09:44:33.764691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.579 [2024-07-14 09:44:33.764717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.579 qpair failed and we were unable to recover it. 00:34:49.579 [2024-07-14 09:44:33.764887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.579 [2024-07-14 09:44:33.764920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.579 qpair failed and we were unable to recover it. 00:34:49.579 [2024-07-14 09:44:33.765108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.579 [2024-07-14 09:44:33.765133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.579 qpair failed and we were unable to recover it. 
00:34:49.579 [2024-07-14 09:44:33.765285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.579 [2024-07-14 09:44:33.765311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.579 qpair failed and we were unable to recover it. 00:34:49.579 [2024-07-14 09:44:33.765500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.579 [2024-07-14 09:44:33.765527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.579 qpair failed and we were unable to recover it. 00:34:49.579 [2024-07-14 09:44:33.765695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.579 [2024-07-14 09:44:33.765721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.579 qpair failed and we were unable to recover it. 00:34:49.579 [2024-07-14 09:44:33.765903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.579 [2024-07-14 09:44:33.765929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.579 qpair failed and we were unable to recover it. 00:34:49.579 [2024-07-14 09:44:33.766122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.579 [2024-07-14 09:44:33.766147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.579 qpair failed and we were unable to recover it. 00:34:49.579 [2024-07-14 09:44:33.766359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.579 [2024-07-14 09:44:33.766385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.579 qpair failed and we were unable to recover it. 00:34:49.579 [2024-07-14 09:44:33.766553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.579 [2024-07-14 09:44:33.766579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.579 qpair failed and we were unable to recover it. 00:34:49.579 [2024-07-14 09:44:33.766789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.579 [2024-07-14 09:44:33.766814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.579 qpair failed and we were unable to recover it. 00:34:49.579 [2024-07-14 09:44:33.767003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.579 [2024-07-14 09:44:33.767029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.579 qpair failed and we were unable to recover it. 00:34:49.579 [2024-07-14 09:44:33.767208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.579 [2024-07-14 09:44:33.767234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.579 qpair failed and we were unable to recover it. 
00:34:49.579 [2024-07-14 09:44:33.767396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.579 [2024-07-14 09:44:33.767421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.579 qpair failed and we were unable to recover it. 00:34:49.579 [2024-07-14 09:44:33.767645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.579 [2024-07-14 09:44:33.767675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.579 qpair failed and we were unable to recover it. 00:34:49.579 [2024-07-14 09:44:33.767864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.579 [2024-07-14 09:44:33.767896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.579 qpair failed and we were unable to recover it. 00:34:49.579 [2024-07-14 09:44:33.768063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.579 [2024-07-14 09:44:33.768088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.579 qpair failed and we were unable to recover it. 00:34:49.579 [2024-07-14 09:44:33.768264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.579 [2024-07-14 09:44:33.768290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.579 qpair failed and we were unable to recover it. 00:34:49.579 [2024-07-14 09:44:33.768454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.579 [2024-07-14 09:44:33.768480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.579 qpair failed and we were unable to recover it. 00:34:49.579 [2024-07-14 09:44:33.768645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.579 [2024-07-14 09:44:33.768671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.579 qpair failed and we were unable to recover it. 00:34:49.579 [2024-07-14 09:44:33.768850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.580 [2024-07-14 09:44:33.768883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.580 qpair failed and we were unable to recover it. 00:34:49.580 [2024-07-14 09:44:33.769080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.580 [2024-07-14 09:44:33.769106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.580 qpair failed and we were unable to recover it. 00:34:49.580 [2024-07-14 09:44:33.769281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.580 [2024-07-14 09:44:33.769307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.580 qpair failed and we were unable to recover it. 
00:34:49.580 [2024-07-14 09:44:33.769498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.580 [2024-07-14 09:44:33.769523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.580 qpair failed and we were unable to recover it. 00:34:49.580 [2024-07-14 09:44:33.769724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.580 [2024-07-14 09:44:33.769749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.580 qpair failed and we were unable to recover it. 00:34:49.580 [2024-07-14 09:44:33.769963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.580 [2024-07-14 09:44:33.769989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.580 qpair failed and we were unable to recover it. 00:34:49.580 [2024-07-14 09:44:33.770155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.580 [2024-07-14 09:44:33.770181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.580 qpair failed and we were unable to recover it. 00:34:49.580 [2024-07-14 09:44:33.770371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.580 [2024-07-14 09:44:33.770398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.580 qpair failed and we were unable to recover it. 00:34:49.580 [2024-07-14 09:44:33.770570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.580 [2024-07-14 09:44:33.770597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.580 qpair failed and we were unable to recover it. 00:34:49.580 [2024-07-14 09:44:33.770767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.580 [2024-07-14 09:44:33.770793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.580 qpair failed and we were unable to recover it. 00:34:49.580 [2024-07-14 09:44:33.770981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.580 [2024-07-14 09:44:33.771007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.580 qpair failed and we were unable to recover it. 00:34:49.580 [2024-07-14 09:44:33.771197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.580 [2024-07-14 09:44:33.771222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.580 qpair failed and we were unable to recover it. 00:34:49.580 [2024-07-14 09:44:33.771380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.580 [2024-07-14 09:44:33.771407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.580 qpair failed and we were unable to recover it. 
00:34:49.580 [2024-07-14 09:44:33.771561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.580 [2024-07-14 09:44:33.771588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.580 qpair failed and we were unable to recover it. 00:34:49.580 [2024-07-14 09:44:33.771803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.580 [2024-07-14 09:44:33.771828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.580 qpair failed and we were unable to recover it. 00:34:49.580 [2024-07-14 09:44:33.772011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.580 [2024-07-14 09:44:33.772037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.580 qpair failed and we were unable to recover it. 00:34:49.580 [2024-07-14 09:44:33.772205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.580 [2024-07-14 09:44:33.772233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.580 qpair failed and we were unable to recover it. 00:34:49.580 [2024-07-14 09:44:33.772428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.580 [2024-07-14 09:44:33.772454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.580 qpair failed and we were unable to recover it. 00:34:49.580 [2024-07-14 09:44:33.772662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.580 [2024-07-14 09:44:33.772687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.580 qpair failed and we were unable to recover it. 00:34:49.580 [2024-07-14 09:44:33.772847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.580 [2024-07-14 09:44:33.772878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.580 qpair failed and we were unable to recover it. 00:34:49.580 [2024-07-14 09:44:33.773077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.580 [2024-07-14 09:44:33.773103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.580 qpair failed and we were unable to recover it. 00:34:49.580 [2024-07-14 09:44:33.773296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.580 [2024-07-14 09:44:33.773321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.580 qpair failed and we were unable to recover it. 00:34:49.580 [2024-07-14 09:44:33.773515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.580 [2024-07-14 09:44:33.773542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.580 qpair failed and we were unable to recover it. 
00:34:49.580 [2024-07-14 09:44:33.773720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:49.580 [2024-07-14 09:44:33.773745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420
00:34:49.580 qpair failed and we were unable to recover it.
00:34:49.580 [2024-07-14 09:44:33.773935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:49.580 [2024-07-14 09:44:33.773961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420
00:34:49.580 qpair failed and we were unable to recover it.
00:34:49.580 [2024-07-14 09:44:33.774169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:49.580 [2024-07-14 09:44:33.774195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420
00:34:49.580 qpair failed and we were unable to recover it.
00:34:49.580 [2024-07-14 09:44:33.774382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:49.580 [2024-07-14 09:44:33.774408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420
00:34:49.580 qpair failed and we were unable to recover it.
00:34:49.580 Malloc0
00:34:49.580 [2024-07-14 09:44:33.774565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:49.580 [2024-07-14 09:44:33.774590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420
00:34:49.580 qpair failed and we were unable to recover it.
00:34:49.580 [2024-07-14 09:44:33.774752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:49.580 [2024-07-14 09:44:33.774779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420
00:34:49.580 qpair failed and we were unable to recover it.
00:34:49.580 09:44:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:49.580 [2024-07-14 09:44:33.774976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:49.580 [2024-07-14 09:44:33.775002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420
00:34:49.580 qpair failed and we were unable to recover it.
00:34:49.580 09:44:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:34:49.580 09:44:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:49.580 [2024-07-14 09:44:33.775206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:49.580 [2024-07-14 09:44:33.775232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420
00:34:49.580 qpair failed and we were unable to recover it.
00:34:49.580 09:44:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:49.580 [2024-07-14 09:44:33.775423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.580 [2024-07-14 09:44:33.775449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.580 qpair failed and we were unable to recover it. 00:34:49.580 [2024-07-14 09:44:33.775634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.580 [2024-07-14 09:44:33.775660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.580 qpair failed and we were unable to recover it. 00:34:49.580 [2024-07-14 09:44:33.775851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.580 [2024-07-14 09:44:33.775883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.580 qpair failed and we were unable to recover it. 00:34:49.580 [2024-07-14 09:44:33.776083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.580 [2024-07-14 09:44:33.776109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.580 qpair failed and we were unable to recover it. 00:34:49.580 [2024-07-14 09:44:33.776295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.580 [2024-07-14 09:44:33.776321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.580 qpair failed and we were unable to recover it. 00:34:49.580 [2024-07-14 09:44:33.776498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.580 [2024-07-14 09:44:33.776524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.580 qpair failed and we were unable to recover it. 00:34:49.580 [2024-07-14 09:44:33.776692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.580 [2024-07-14 09:44:33.776717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.580 qpair failed and we were unable to recover it. 00:34:49.580 [2024-07-14 09:44:33.776914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.580 [2024-07-14 09:44:33.776940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.580 qpair failed and we were unable to recover it. 00:34:49.580 [2024-07-14 09:44:33.777163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.581 [2024-07-14 09:44:33.777188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.581 qpair failed and we were unable to recover it. 
00:34:49.581 [2024-07-14 09:44:33.777354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:49.581 [2024-07-14 09:44:33.777379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420
00:34:49.581 qpair failed and we were unable to recover it.
00:34:49.581 [2024-07-14 09:44:33.777535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:49.581 [2024-07-14 09:44:33.777560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420
00:34:49.581 qpair failed and we were unable to recover it.
00:34:49.581 [2024-07-14 09:44:33.777738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:49.581 [2024-07-14 09:44:33.777763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420
00:34:49.581 qpair failed and we were unable to recover it.
00:34:49.581 [2024-07-14 09:44:33.777936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:49.581 [2024-07-14 09:44:33.777963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420
00:34:49.581 qpair failed and we were unable to recover it.
00:34:49.581 [2024-07-14 09:44:33.778049] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:34:49.581 [2024-07-14 09:44:33.778158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:49.581 [2024-07-14 09:44:33.778183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420
00:34:49.581 qpair failed and we were unable to recover it.
00:34:49.581 [2024-07-14 09:44:33.778368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:49.581 [2024-07-14 09:44:33.778394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420
00:34:49.581 qpair failed and we were unable to recover it.
00:34:49.581 [2024-07-14 09:44:33.778592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:49.581 [2024-07-14 09:44:33.778618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420
00:34:49.581 qpair failed and we were unable to recover it.
00:34:49.581 [2024-07-14 09:44:33.778797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:49.581 [2024-07-14 09:44:33.778823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420
00:34:49.581 qpair failed and we were unable to recover it.
00:34:49.581 [2024-07-14 09:44:33.778993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:49.581 [2024-07-14 09:44:33.779019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420
00:34:49.581 qpair failed and we were unable to recover it.
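The repeated failures above are the host side of the disconnect test: errno 111 is ECONNREFUSED, so every nvme_tcp_qpair_connect_sock() attempt against 10.0.0.2 port 4420 is being refused while the target is still being configured. The *** TCP Transport Init *** notice is printed when the traced rpc_cmd nvmf_create_transport -t tcp call creates the TCP transport on the target. A minimal hand-driven sketch of that target-side step (the rpc.py path and default options are assumptions, not taken from this run) would be:

    # sketch only: create the NVMe-oF TCP transport on an already-running nvmf_tgt
    sudo ./scripts/rpc.py nvmf_create_transport -t TCP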
00:34:49.581 [2024-07-14 09:44:33.779185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.581 [2024-07-14 09:44:33.779211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.581 qpair failed and we were unable to recover it. 00:34:49.581 [2024-07-14 09:44:33.779365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.581 [2024-07-14 09:44:33.779390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.581 qpair failed and we were unable to recover it. 00:34:49.581 [2024-07-14 09:44:33.779583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.581 [2024-07-14 09:44:33.779608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.581 qpair failed and we were unable to recover it. 00:34:49.581 [2024-07-14 09:44:33.779773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.581 [2024-07-14 09:44:33.779798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.581 qpair failed and we were unable to recover it. 00:34:49.581 [2024-07-14 09:44:33.779971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.581 [2024-07-14 09:44:33.780003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.581 qpair failed and we were unable to recover it. 00:34:49.581 [2024-07-14 09:44:33.780206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.581 [2024-07-14 09:44:33.780232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.581 qpair failed and we were unable to recover it. 00:34:49.581 [2024-07-14 09:44:33.780396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.581 [2024-07-14 09:44:33.780421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.581 qpair failed and we were unable to recover it. 00:34:49.581 [2024-07-14 09:44:33.780616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.581 [2024-07-14 09:44:33.780642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.581 qpair failed and we were unable to recover it. 00:34:49.581 [2024-07-14 09:44:33.780822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.581 [2024-07-14 09:44:33.780847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.581 qpair failed and we were unable to recover it. 00:34:49.581 [2024-07-14 09:44:33.781068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.581 [2024-07-14 09:44:33.781093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.581 qpair failed and we were unable to recover it. 
00:34:49.581 [2024-07-14 09:44:33.781321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.581 [2024-07-14 09:44:33.781346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.581 qpair failed and we were unable to recover it. 00:34:49.581 [2024-07-14 09:44:33.781516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.581 [2024-07-14 09:44:33.781543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.581 qpair failed and we were unable to recover it. 00:34:49.581 [2024-07-14 09:44:33.781707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.581 [2024-07-14 09:44:33.781732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.581 qpair failed and we were unable to recover it. 00:34:49.581 [2024-07-14 09:44:33.781909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.581 [2024-07-14 09:44:33.781935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.581 qpair failed and we were unable to recover it. 00:34:49.581 [2024-07-14 09:44:33.782107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.581 [2024-07-14 09:44:33.782133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.581 qpair failed and we were unable to recover it. 00:34:49.581 [2024-07-14 09:44:33.782299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.581 [2024-07-14 09:44:33.782325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.581 qpair failed and we were unable to recover it. 00:34:49.581 [2024-07-14 09:44:33.782540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.581 [2024-07-14 09:44:33.782565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.581 qpair failed and we were unable to recover it. 00:34:49.581 [2024-07-14 09:44:33.782756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.581 [2024-07-14 09:44:33.782781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.581 qpair failed and we were unable to recover it. 00:34:49.581 [2024-07-14 09:44:33.782972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.581 [2024-07-14 09:44:33.783007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.581 qpair failed and we were unable to recover it. 00:34:49.581 [2024-07-14 09:44:33.783178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.581 [2024-07-14 09:44:33.783204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.581 qpair failed and we were unable to recover it. 
00:34:49.581 [2024-07-14 09:44:33.783422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.581 [2024-07-14 09:44:33.783448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.581 qpair failed and we were unable to recover it. 00:34:49.581 [2024-07-14 09:44:33.783618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.581 [2024-07-14 09:44:33.783643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.581 qpair failed and we were unable to recover it. 00:34:49.581 [2024-07-14 09:44:33.783803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.581 [2024-07-14 09:44:33.783828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.581 qpair failed and we were unable to recover it. 00:34:49.581 [2024-07-14 09:44:33.784029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.581 [2024-07-14 09:44:33.784059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.581 qpair failed and we were unable to recover it. 00:34:49.581 [2024-07-14 09:44:33.784231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.581 [2024-07-14 09:44:33.784256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.581 qpair failed and we were unable to recover it. 00:34:49.581 [2024-07-14 09:44:33.784410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.581 [2024-07-14 09:44:33.784436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.582 qpair failed and we were unable to recover it. 00:34:49.582 [2024-07-14 09:44:33.784630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.582 [2024-07-14 09:44:33.784655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.582 qpair failed and we were unable to recover it. 00:34:49.582 [2024-07-14 09:44:33.784813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.582 [2024-07-14 09:44:33.784838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.582 qpair failed and we were unable to recover it. 00:34:49.582 [2024-07-14 09:44:33.785019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.582 [2024-07-14 09:44:33.785045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.582 qpair failed and we were unable to recover it. 00:34:49.582 [2024-07-14 09:44:33.785240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.582 [2024-07-14 09:44:33.785266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.582 qpair failed and we were unable to recover it. 
00:34:49.582 [2024-07-14 09:44:33.785434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:49.582 [2024-07-14 09:44:33.785461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420
00:34:49.582 qpair failed and we were unable to recover it.
00:34:49.582 [2024-07-14 09:44:33.785683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:49.582 [2024-07-14 09:44:33.785709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420
00:34:49.582 qpair failed and we were unable to recover it.
00:34:49.582 [2024-07-14 09:44:33.785897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:49.582 [2024-07-14 09:44:33.785934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420
00:34:49.582 qpair failed and we were unable to recover it.
00:34:49.582 [2024-07-14 09:44:33.786127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:49.582 [2024-07-14 09:44:33.786154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420
00:34:49.582 qpair failed and we were unable to recover it.
00:34:49.582 09:44:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:49.582 [2024-07-14 09:44:33.786312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:49.582 [2024-07-14 09:44:33.786338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420
00:34:49.582 qpair failed and we were unable to recover it.
00:34:49.582 09:44:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:34:49.582 09:44:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:49.582 [2024-07-14 09:44:33.786545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:49.582 [2024-07-14 09:44:33.786575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420
00:34:49.582 qpair failed and we were unable to recover it.
00:34:49.582 [2024-07-14 09:44:33.786741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:49.582 [2024-07-14 09:44:33.786767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420
00:34:49.582 qpair failed and we were unable to recover it.
00:34:49.582 [2024-07-14 09:44:33.786945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:49.582 [2024-07-14 09:44:33.786972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420
00:34:49.582 qpair failed and we were unable to recover it.
00:34:49.582 [2024-07-14 09:44:33.787138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.582 [2024-07-14 09:44:33.787165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.582 qpair failed and we were unable to recover it. 00:34:49.582 [2024-07-14 09:44:33.787350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.582 [2024-07-14 09:44:33.787376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.582 qpair failed and we were unable to recover it. 00:34:49.582 [2024-07-14 09:44:33.787566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.582 [2024-07-14 09:44:33.787591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.582 qpair failed and we were unable to recover it. 00:34:49.582 [2024-07-14 09:44:33.787746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.582 [2024-07-14 09:44:33.787771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.582 qpair failed and we were unable to recover it. 00:34:49.582 [2024-07-14 09:44:33.787940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.582 [2024-07-14 09:44:33.787984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.582 qpair failed and we were unable to recover it. 00:34:49.582 [2024-07-14 09:44:33.788147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.582 [2024-07-14 09:44:33.788173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.582 qpair failed and we were unable to recover it. 00:34:49.582 [2024-07-14 09:44:33.788337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.582 [2024-07-14 09:44:33.788363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.582 qpair failed and we were unable to recover it. 00:34:49.582 [2024-07-14 09:44:33.788548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.582 [2024-07-14 09:44:33.788573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.582 qpair failed and we were unable to recover it. 00:34:49.582 [2024-07-14 09:44:33.788760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.582 [2024-07-14 09:44:33.788786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.582 qpair failed and we were unable to recover it. 00:34:49.582 [2024-07-14 09:44:33.788961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.582 [2024-07-14 09:44:33.788988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.582 qpair failed and we were unable to recover it. 
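The traced rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 call above creates the subsystem the host will eventually reconnect to; -a allows any host NQN to connect and -s sets the subsystem serial number. Driven by hand, the equivalent step is roughly (sketch only; the rpc.py location and default RPC socket are assumptions):

    # sketch only: create the subsystem, allow any host, set its serial number
    sudo ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001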
00:34:49.582 [2024-07-14 09:44:33.789154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.582 [2024-07-14 09:44:33.789187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.582 qpair failed and we were unable to recover it. 00:34:49.582 [2024-07-14 09:44:33.789351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.582 [2024-07-14 09:44:33.789377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.582 qpair failed and we were unable to recover it. 00:34:49.582 [2024-07-14 09:44:33.789556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.582 [2024-07-14 09:44:33.789582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.582 qpair failed and we were unable to recover it. 00:34:49.582 [2024-07-14 09:44:33.789772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.582 [2024-07-14 09:44:33.789797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.582 qpair failed and we were unable to recover it. 00:34:49.582 [2024-07-14 09:44:33.789966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.582 [2024-07-14 09:44:33.789992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.582 qpair failed and we were unable to recover it. 00:34:49.582 [2024-07-14 09:44:33.790211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.582 [2024-07-14 09:44:33.790236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.582 qpair failed and we were unable to recover it. 00:34:49.582 [2024-07-14 09:44:33.790418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.582 [2024-07-14 09:44:33.790444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.582 qpair failed and we were unable to recover it. 00:34:49.582 [2024-07-14 09:44:33.790593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.582 [2024-07-14 09:44:33.790618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.582 qpair failed and we were unable to recover it. 00:34:49.582 [2024-07-14 09:44:33.790777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.582 [2024-07-14 09:44:33.790802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.582 qpair failed and we were unable to recover it. 00:34:49.582 [2024-07-14 09:44:33.790996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.582 [2024-07-14 09:44:33.791023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.582 qpair failed and we were unable to recover it. 
00:34:49.582 [2024-07-14 09:44:33.791177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.582 [2024-07-14 09:44:33.791203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.582 qpair failed and we were unable to recover it. 00:34:49.582 [2024-07-14 09:44:33.791355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.582 [2024-07-14 09:44:33.791381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.582 qpair failed and we were unable to recover it. 00:34:49.582 [2024-07-14 09:44:33.791594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.582 [2024-07-14 09:44:33.791619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.582 qpair failed and we were unable to recover it. 00:34:49.582 [2024-07-14 09:44:33.791807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.582 [2024-07-14 09:44:33.791833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.582 qpair failed and we were unable to recover it. 00:34:49.582 [2024-07-14 09:44:33.792000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.582 [2024-07-14 09:44:33.792026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.582 qpair failed and we were unable to recover it. 00:34:49.582 [2024-07-14 09:44:33.792181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.582 [2024-07-14 09:44:33.792206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.582 qpair failed and we were unable to recover it. 00:34:49.582 [2024-07-14 09:44:33.792371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.582 [2024-07-14 09:44:33.792397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.582 qpair failed and we were unable to recover it. 00:34:49.583 [2024-07-14 09:44:33.792579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.583 [2024-07-14 09:44:33.792604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.583 qpair failed and we were unable to recover it. 00:34:49.583 [2024-07-14 09:44:33.792766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.583 [2024-07-14 09:44:33.792791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.583 qpair failed and we were unable to recover it. 00:34:49.583 [2024-07-14 09:44:33.792951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.583 [2024-07-14 09:44:33.792977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.583 qpair failed and we were unable to recover it. 
00:34:49.583 [2024-07-14 09:44:33.793173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:49.583 [2024-07-14 09:44:33.793199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420
00:34:49.583 qpair failed and we were unable to recover it.
00:34:49.583 [2024-07-14 09:44:33.793361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:49.583 [2024-07-14 09:44:33.793386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420
00:34:49.583 qpair failed and we were unable to recover it.
00:34:49.583 [2024-07-14 09:44:33.793578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:49.583 [2024-07-14 09:44:33.793604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420
00:34:49.583 qpair failed and we were unable to recover it.
00:34:49.583 [2024-07-14 09:44:33.793760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:49.583 [2024-07-14 09:44:33.793785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420
00:34:49.583 qpair failed and we were unable to recover it.
00:34:49.583 [2024-07-14 09:44:33.793963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:49.583 [2024-07-14 09:44:33.793995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420
00:34:49.583 qpair failed and we were unable to recover it.
00:34:49.583 [2024-07-14 09:44:33.794154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:49.583 [2024-07-14 09:44:33.794180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420
00:34:49.583 qpair failed and we were unable to recover it.
00:34:49.583 09:44:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:49.583 [2024-07-14 09:44:33.794394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:49.583 09:44:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:34:49.583 [2024-07-14 09:44:33.794420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420
00:34:49.583 qpair failed and we were unable to recover it.
00:34:49.583 09:44:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:49.583 [2024-07-14 09:44:33.794610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:49.583 [2024-07-14 09:44:33.794636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420
00:34:49.583 qpair failed and we were unable to recover it.
00:34:49.583 09:44:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:49.583 [2024-07-14 09:44:33.794803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.583 [2024-07-14 09:44:33.794829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.583 qpair failed and we were unable to recover it. 00:34:49.583 [2024-07-14 09:44:33.795026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.583 [2024-07-14 09:44:33.795052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.583 qpair failed and we were unable to recover it. 00:34:49.583 [2024-07-14 09:44:33.795239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.583 [2024-07-14 09:44:33.795265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.583 qpair failed and we were unable to recover it. 00:34:49.583 [2024-07-14 09:44:33.795453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.583 [2024-07-14 09:44:33.795479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.583 qpair failed and we were unable to recover it. 00:34:49.583 [2024-07-14 09:44:33.795647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.583 [2024-07-14 09:44:33.795673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.583 qpair failed and we were unable to recover it. 00:34:49.583 [2024-07-14 09:44:33.795833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.583 [2024-07-14 09:44:33.795859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.583 qpair failed and we were unable to recover it. 00:34:49.583 [2024-07-14 09:44:33.796069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.583 [2024-07-14 09:44:33.796095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.583 qpair failed and we were unable to recover it. 00:34:49.583 [2024-07-14 09:44:33.796249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.583 [2024-07-14 09:44:33.796275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.583 qpair failed and we were unable to recover it. 00:34:49.583 [2024-07-14 09:44:33.796438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.583 [2024-07-14 09:44:33.796463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.583 qpair failed and we were unable to recover it. 
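The traced rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 call attaches the Malloc0 bdev, whose name was echoed a little earlier, as a namespace of the subsystem. The host-side connect() retries keep failing with ECONNREFUSED until the subsystem is also exposed through a TCP listener on 10.0.0.2 port 4420; that listener step is not visible in this excerpt, so the following is only a sketch of how the remaining setup is typically finished with rpc.py (address and port reused from the log, everything else assumed):

    # sketch only: attach the namespace, then expose the subsystem on the address the host is retrying
    sudo ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    sudo ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420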
00:34:49.583 [2024-07-14 09:44:33.796648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.583 [2024-07-14 09:44:33.796674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.583 qpair failed and we were unable to recover it. 00:34:49.583 [2024-07-14 09:44:33.796853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.583 [2024-07-14 09:44:33.796885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.583 qpair failed and we were unable to recover it. 00:34:49.583 [2024-07-14 09:44:33.797108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.583 [2024-07-14 09:44:33.797133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.583 qpair failed and we were unable to recover it. 00:34:49.583 [2024-07-14 09:44:33.797320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.583 [2024-07-14 09:44:33.797346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.583 qpair failed and we were unable to recover it. 00:34:49.583 [2024-07-14 09:44:33.797509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.583 [2024-07-14 09:44:33.797535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.583 qpair failed and we were unable to recover it. 00:34:49.583 [2024-07-14 09:44:33.797701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.583 [2024-07-14 09:44:33.797726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.583 qpair failed and we were unable to recover it. 00:34:49.583 [2024-07-14 09:44:33.797914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.583 [2024-07-14 09:44:33.797940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.583 qpair failed and we were unable to recover it. 00:34:49.583 [2024-07-14 09:44:33.798130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.583 [2024-07-14 09:44:33.798155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.583 qpair failed and we were unable to recover it. 00:34:49.583 [2024-07-14 09:44:33.798311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.583 [2024-07-14 09:44:33.798336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.583 qpair failed and we were unable to recover it. 00:34:49.583 [2024-07-14 09:44:33.798511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.583 [2024-07-14 09:44:33.798537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.583 qpair failed and we were unable to recover it. 
00:34:49.583 [2024-07-14 09:44:33.798731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.583 [2024-07-14 09:44:33.798756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.583 qpair failed and we were unable to recover it. 00:34:49.583 [2024-07-14 09:44:33.798949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.583 [2024-07-14 09:44:33.798977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.583 qpair failed and we were unable to recover it. 00:34:49.583 [2024-07-14 09:44:33.799163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.583 [2024-07-14 09:44:33.799189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.583 qpair failed and we were unable to recover it. 00:34:49.583 [2024-07-14 09:44:33.799374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.583 [2024-07-14 09:44:33.799399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.583 qpair failed and we were unable to recover it. 00:34:49.583 [2024-07-14 09:44:33.799589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.583 [2024-07-14 09:44:33.799614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.583 qpair failed and we were unable to recover it. 00:34:49.583 [2024-07-14 09:44:33.799807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.583 [2024-07-14 09:44:33.799833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.583 qpair failed and we were unable to recover it. 00:34:49.583 [2024-07-14 09:44:33.800033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.583 [2024-07-14 09:44:33.800059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.583 qpair failed and we were unable to recover it. 00:34:49.583 [2024-07-14 09:44:33.800238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.583 [2024-07-14 09:44:33.800263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.583 qpair failed and we were unable to recover it. 00:34:49.583 [2024-07-14 09:44:33.800420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.583 [2024-07-14 09:44:33.800445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.583 qpair failed and we were unable to recover it. 00:34:49.583 [2024-07-14 09:44:33.800606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.584 [2024-07-14 09:44:33.800632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.584 qpair failed and we were unable to recover it. 
00:34:49.584 [2024-07-14 09:44:33.800821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.584 [2024-07-14 09:44:33.800846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.584 qpair failed and we were unable to recover it. 00:34:49.584 [2024-07-14 09:44:33.801041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.584 [2024-07-14 09:44:33.801067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.584 qpair failed and we were unable to recover it. 00:34:49.584 [2024-07-14 09:44:33.801225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.584 [2024-07-14 09:44:33.801251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.584 qpair failed and we were unable to recover it. 00:34:49.584 [2024-07-14 09:44:33.801440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.584 [2024-07-14 09:44:33.801466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.584 qpair failed and we were unable to recover it. 00:34:49.584 [2024-07-14 09:44:33.801623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.584 [2024-07-14 09:44:33.801650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.584 qpair failed and we were unable to recover it. 00:34:49.584 [2024-07-14 09:44:33.801804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.584 [2024-07-14 09:44:33.801829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.584 qpair failed and we were unable to recover it. 00:34:49.584 [2024-07-14 09:44:33.802040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.584 [2024-07-14 09:44:33.802066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.584 qpair failed and we were unable to recover it. 00:34:49.584 [2024-07-14 09:44:33.802224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.584 [2024-07-14 09:44:33.802250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.584 qpair failed and we were unable to recover it. 00:34:49.584 09:44:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.584 [2024-07-14 09:44:33.802406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.584 [2024-07-14 09:44:33.802431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.584 qpair failed and we were unable to recover it. 
00:34:49.584 09:44:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:49.584 09:44:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.584 [2024-07-14 09:44:33.802619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.584 [2024-07-14 09:44:33.802645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.584 qpair failed and we were unable to recover it. 00:34:49.584 09:44:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:49.584 [2024-07-14 09:44:33.802833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.584 [2024-07-14 09:44:33.802859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.584 qpair failed and we were unable to recover it. 00:34:49.584 [2024-07-14 09:44:33.803038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.584 [2024-07-14 09:44:33.803063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.584 qpair failed and we were unable to recover it. 00:34:49.584 [2024-07-14 09:44:33.803220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.584 [2024-07-14 09:44:33.803245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.584 qpair failed and we were unable to recover it. 00:34:49.584 [2024-07-14 09:44:33.803415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.584 [2024-07-14 09:44:33.803442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.584 qpair failed and we were unable to recover it. 00:34:49.584 [2024-07-14 09:44:33.803625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.584 [2024-07-14 09:44:33.803651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.584 qpair failed and we were unable to recover it. 00:34:49.584 [2024-07-14 09:44:33.803818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.584 [2024-07-14 09:44:33.803843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.584 qpair failed and we were unable to recover it. 00:34:49.584 [2024-07-14 09:44:33.804015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.584 [2024-07-14 09:44:33.804041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.584 qpair failed and we were unable to recover it. 
00:34:49.584 [2024-07-14 09:44:33.804194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.584 [2024-07-14 09:44:33.804220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.584 qpair failed and we were unable to recover it. 00:34:49.584 [2024-07-14 09:44:33.804409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.584 [2024-07-14 09:44:33.804436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.584 qpair failed and we were unable to recover it. 00:34:49.584 [2024-07-14 09:44:33.804620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.584 [2024-07-14 09:44:33.804646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.584 qpair failed and we were unable to recover it. 00:34:49.584 [2024-07-14 09:44:33.804801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.584 [2024-07-14 09:44:33.804827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.584 qpair failed and we were unable to recover it. 00:34:49.584 [2024-07-14 09:44:33.805000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.584 [2024-07-14 09:44:33.805027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.584 qpair failed and we were unable to recover it. 00:34:49.584 [2024-07-14 09:44:33.805182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.584 [2024-07-14 09:44:33.805208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.584 qpair failed and we were unable to recover it. 00:34:49.584 [2024-07-14 09:44:33.805363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.584 [2024-07-14 09:44:33.805389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.584 qpair failed and we were unable to recover it. 00:34:49.584 [2024-07-14 09:44:33.805573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.584 [2024-07-14 09:44:33.805598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.584 qpair failed and we were unable to recover it. 00:34:49.584 [2024-07-14 09:44:33.805763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.584 [2024-07-14 09:44:33.805790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.584 qpair failed and we were unable to recover it. 00:34:49.584 [2024-07-14 09:44:33.805968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.584 [2024-07-14 09:44:33.805994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.584 qpair failed and we were unable to recover it. 
00:34:49.584 [2024-07-14 09:44:33.806205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:49.584 [2024-07-14 09:44:33.806230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1668000b90 with addr=10.0.0.2, port=4420 00:34:49.584 qpair failed and we were unable to recover it. 00:34:49.584 [2024-07-14 09:44:33.806254] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:49.584 [2024-07-14 09:44:33.808838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.584 [2024-07-14 09:44:33.809040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.584 [2024-07-14 09:44:33.809068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.584 [2024-07-14 09:44:33.809083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.584 [2024-07-14 09:44:33.809096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:49.584 [2024-07-14 09:44:33.809132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:49.584 qpair failed and we were unable to recover it. 00:34:49.584 09:44:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.584 09:44:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:49.584 09:44:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.584 09:44:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:49.584 09:44:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.584 09:44:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 898189 00:34:49.584 [2024-07-14 09:44:33.818705] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.584 [2024-07-14 09:44:33.818905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.584 [2024-07-14 09:44:33.818936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.584 [2024-07-14 09:44:33.818951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.584 [2024-07-14 09:44:33.818963] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:49.584 [2024-07-14 09:44:33.818994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:49.584 qpair failed and we were unable to recover it. 
00:34:49.584 [2024-07-14 09:44:33.828859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.584 [2024-07-14 09:44:33.829021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.584 [2024-07-14 09:44:33.829048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.584 [2024-07-14 09:44:33.829063] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.584 [2024-07-14 09:44:33.829075] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:49.585 [2024-07-14 09:44:33.829105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:49.585 qpair failed and we were unable to recover it. 00:34:49.585 [2024-07-14 09:44:33.838751] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.585 [2024-07-14 09:44:33.838928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.585 [2024-07-14 09:44:33.838956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.585 [2024-07-14 09:44:33.838970] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.585 [2024-07-14 09:44:33.838982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:49.585 [2024-07-14 09:44:33.839012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:49.585 qpair failed and we were unable to recover it. 00:34:49.585 [2024-07-14 09:44:33.848792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.585 [2024-07-14 09:44:33.848986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.585 [2024-07-14 09:44:33.849014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.585 [2024-07-14 09:44:33.849028] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.585 [2024-07-14 09:44:33.849044] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:49.585 [2024-07-14 09:44:33.849073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:49.585 qpair failed and we were unable to recover it. 
00:34:49.585 [2024-07-14 09:44:33.858754] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.585 [2024-07-14 09:44:33.858919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.585 [2024-07-14 09:44:33.858951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.585 [2024-07-14 09:44:33.858967] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.585 [2024-07-14 09:44:33.858979] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:49.585 [2024-07-14 09:44:33.859009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:49.585 qpair failed and we were unable to recover it. 00:34:49.585 [2024-07-14 09:44:33.868786] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.585 [2024-07-14 09:44:33.868979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.585 [2024-07-14 09:44:33.869007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.585 [2024-07-14 09:44:33.869021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.585 [2024-07-14 09:44:33.869033] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:49.585 [2024-07-14 09:44:33.869063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:49.585 qpair failed and we were unable to recover it. 00:34:49.585 [2024-07-14 09:44:33.878786] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.585 [2024-07-14 09:44:33.878960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.585 [2024-07-14 09:44:33.878986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.585 [2024-07-14 09:44:33.879001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.585 [2024-07-14 09:44:33.879014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:49.585 [2024-07-14 09:44:33.879044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:49.585 qpair failed and we were unable to recover it. 
00:34:49.585 [2024-07-14 09:44:33.888838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.585 [2024-07-14 09:44:33.889050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.585 [2024-07-14 09:44:33.889077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.585 [2024-07-14 09:44:33.889091] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.585 [2024-07-14 09:44:33.889103] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:49.585 [2024-07-14 09:44:33.889133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:49.585 qpair failed and we were unable to recover it. 00:34:49.585 [2024-07-14 09:44:33.898805] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.585 [2024-07-14 09:44:33.898967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.585 [2024-07-14 09:44:33.898994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.585 [2024-07-14 09:44:33.899008] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.585 [2024-07-14 09:44:33.899026] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:49.585 [2024-07-14 09:44:33.899057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:49.585 qpair failed and we were unable to recover it. 00:34:49.585 [2024-07-14 09:44:33.908836] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.585 [2024-07-14 09:44:33.908996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.585 [2024-07-14 09:44:33.909022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.585 [2024-07-14 09:44:33.909037] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.585 [2024-07-14 09:44:33.909049] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:49.585 [2024-07-14 09:44:33.909079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:49.585 qpair failed and we were unable to recover it. 
00:34:49.585 [2024-07-14 09:44:33.918971] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.585 [2024-07-14 09:44:33.919143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.585 [2024-07-14 09:44:33.919182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.585 [2024-07-14 09:44:33.919212] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.585 [2024-07-14 09:44:33.919224] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:49.585 [2024-07-14 09:44:33.919270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:49.585 qpair failed and we were unable to recover it. 00:34:49.585 [2024-07-14 09:44:33.928942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.585 [2024-07-14 09:44:33.929112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.585 [2024-07-14 09:44:33.929140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.585 [2024-07-14 09:44:33.929154] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.585 [2024-07-14 09:44:33.929167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:49.585 [2024-07-14 09:44:33.929210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:49.585 qpair failed and we were unable to recover it. 00:34:49.585 [2024-07-14 09:44:33.938973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.585 [2024-07-14 09:44:33.939136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.585 [2024-07-14 09:44:33.939162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.585 [2024-07-14 09:44:33.939177] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.585 [2024-07-14 09:44:33.939189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:49.585 [2024-07-14 09:44:33.939218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:49.585 qpair failed and we were unable to recover it. 
00:34:49.585 [2024-07-14 09:44:33.949009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.585 [2024-07-14 09:44:33.949181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.585 [2024-07-14 09:44:33.949208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.585 [2024-07-14 09:44:33.949224] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.585 [2024-07-14 09:44:33.949236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:49.585 [2024-07-14 09:44:33.949265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:49.585 qpair failed and we were unable to recover it. 00:34:49.585 [2024-07-14 09:44:33.959011] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.585 [2024-07-14 09:44:33.959179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.585 [2024-07-14 09:44:33.959206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.585 [2024-07-14 09:44:33.959220] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.586 [2024-07-14 09:44:33.959247] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:49.586 [2024-07-14 09:44:33.959276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:49.586 qpair failed and we were unable to recover it. 00:34:49.586 [2024-07-14 09:44:33.969066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.586 [2024-07-14 09:44:33.969238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.586 [2024-07-14 09:44:33.969265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.586 [2024-07-14 09:44:33.969294] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.586 [2024-07-14 09:44:33.969307] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:49.586 [2024-07-14 09:44:33.969336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:49.586 qpair failed and we were unable to recover it. 
00:34:49.586 [2024-07-14 09:44:33.979063] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.586 [2024-07-14 09:44:33.979223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.586 [2024-07-14 09:44:33.979250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.586 [2024-07-14 09:44:33.979264] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.586 [2024-07-14 09:44:33.979276] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:49.586 [2024-07-14 09:44:33.979306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:49.586 qpair failed and we were unable to recover it. 00:34:49.586 [2024-07-14 09:44:33.989131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.586 [2024-07-14 09:44:33.989305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.586 [2024-07-14 09:44:33.989332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.586 [2024-07-14 09:44:33.989352] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.586 [2024-07-14 09:44:33.989379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:49.586 [2024-07-14 09:44:33.989420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:49.586 qpair failed and we were unable to recover it. 00:34:49.586 [2024-07-14 09:44:33.999239] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.586 [2024-07-14 09:44:33.999416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.586 [2024-07-14 09:44:33.999442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.586 [2024-07-14 09:44:33.999456] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.586 [2024-07-14 09:44:33.999468] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:49.586 [2024-07-14 09:44:33.999511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:49.586 qpair failed and we were unable to recover it. 
00:34:49.586 [2024-07-14 09:44:34.009161] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.586 [2024-07-14 09:44:34.009327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.586 [2024-07-14 09:44:34.009354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.586 [2024-07-14 09:44:34.009369] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.586 [2024-07-14 09:44:34.009381] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:49.586 [2024-07-14 09:44:34.009410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:49.586 qpair failed and we were unable to recover it. 00:34:49.845 [2024-07-14 09:44:34.019183] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.845 [2024-07-14 09:44:34.019346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.845 [2024-07-14 09:44:34.019373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.845 [2024-07-14 09:44:34.019387] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.845 [2024-07-14 09:44:34.019399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:49.845 [2024-07-14 09:44:34.019429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:49.845 qpair failed and we were unable to recover it. 00:34:49.845 [2024-07-14 09:44:34.029264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.845 [2024-07-14 09:44:34.029432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.845 [2024-07-14 09:44:34.029468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.845 [2024-07-14 09:44:34.029497] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.845 [2024-07-14 09:44:34.029509] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:49.845 [2024-07-14 09:44:34.029553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:49.845 qpair failed and we were unable to recover it. 
00:34:49.845 [2024-07-14 09:44:34.039270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.845 [2024-07-14 09:44:34.039436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.845 [2024-07-14 09:44:34.039463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.845 [2024-07-14 09:44:34.039477] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.845 [2024-07-14 09:44:34.039489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:49.845 [2024-07-14 09:44:34.039521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:49.845 qpair failed and we were unable to recover it. 00:34:49.845 [2024-07-14 09:44:34.049294] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.845 [2024-07-14 09:44:34.049461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.845 [2024-07-14 09:44:34.049488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.845 [2024-07-14 09:44:34.049503] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.845 [2024-07-14 09:44:34.049515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:49.845 [2024-07-14 09:44:34.049544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:49.845 qpair failed and we were unable to recover it. 00:34:49.845 [2024-07-14 09:44:34.059301] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.845 [2024-07-14 09:44:34.059465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.845 [2024-07-14 09:44:34.059491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.845 [2024-07-14 09:44:34.059505] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.845 [2024-07-14 09:44:34.059518] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:49.845 [2024-07-14 09:44:34.059547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:49.845 qpair failed and we were unable to recover it. 
00:34:49.845 [2024-07-14 09:44:34.069360] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.845 [2024-07-14 09:44:34.069510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.845 [2024-07-14 09:44:34.069536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.845 [2024-07-14 09:44:34.069550] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.845 [2024-07-14 09:44:34.069563] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:49.845 [2024-07-14 09:44:34.069592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:49.845 qpair failed and we were unable to recover it. 00:34:49.845 [2024-07-14 09:44:34.079346] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.845 [2024-07-14 09:44:34.079507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.845 [2024-07-14 09:44:34.079533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.845 [2024-07-14 09:44:34.079553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.845 [2024-07-14 09:44:34.079567] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:49.845 [2024-07-14 09:44:34.079596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:49.845 qpair failed and we were unable to recover it. 00:34:49.845 [2024-07-14 09:44:34.089444] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.845 [2024-07-14 09:44:34.089618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.845 [2024-07-14 09:44:34.089643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.845 [2024-07-14 09:44:34.089658] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.845 [2024-07-14 09:44:34.089685] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:49.845 [2024-07-14 09:44:34.089714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:49.845 qpair failed and we were unable to recover it. 
00:34:49.845 [2024-07-14 09:44:34.099421] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.845 [2024-07-14 09:44:34.099582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.845 [2024-07-14 09:44:34.099607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.845 [2024-07-14 09:44:34.099622] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.845 [2024-07-14 09:44:34.099634] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:49.845 [2024-07-14 09:44:34.099664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:49.845 qpair failed and we were unable to recover it. 00:34:49.845 [2024-07-14 09:44:34.109452] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.845 [2024-07-14 09:44:34.109604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.845 [2024-07-14 09:44:34.109631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.845 [2024-07-14 09:44:34.109645] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.845 [2024-07-14 09:44:34.109657] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:49.845 [2024-07-14 09:44:34.109686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:49.845 qpair failed and we were unable to recover it. 00:34:49.845 [2024-07-14 09:44:34.119464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.845 [2024-07-14 09:44:34.119626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.845 [2024-07-14 09:44:34.119651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.845 [2024-07-14 09:44:34.119666] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.845 [2024-07-14 09:44:34.119678] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:49.845 [2024-07-14 09:44:34.119710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:49.845 qpair failed and we were unable to recover it. 
00:34:49.845 [2024-07-14 09:44:34.129492] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.845 [2024-07-14 09:44:34.129660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.845 [2024-07-14 09:44:34.129686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.845 [2024-07-14 09:44:34.129700] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.845 [2024-07-14 09:44:34.129713] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:49.845 [2024-07-14 09:44:34.129742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:49.845 qpair failed and we were unable to recover it. 00:34:49.845 [2024-07-14 09:44:34.139544] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.845 [2024-07-14 09:44:34.139706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.846 [2024-07-14 09:44:34.139732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.846 [2024-07-14 09:44:34.139746] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.846 [2024-07-14 09:44:34.139758] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:49.846 [2024-07-14 09:44:34.139806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:49.846 qpair failed and we were unable to recover it. 00:34:49.846 [2024-07-14 09:44:34.149587] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.846 [2024-07-14 09:44:34.149750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.846 [2024-07-14 09:44:34.149776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.846 [2024-07-14 09:44:34.149790] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.846 [2024-07-14 09:44:34.149802] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:49.846 [2024-07-14 09:44:34.149846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:49.846 qpair failed and we were unable to recover it. 
00:34:49.846 [2024-07-14 09:44:34.159585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.846 [2024-07-14 09:44:34.159794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.846 [2024-07-14 09:44:34.159819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.846 [2024-07-14 09:44:34.159834] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.846 [2024-07-14 09:44:34.159846] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:49.846 [2024-07-14 09:44:34.159884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:49.846 qpair failed and we were unable to recover it. 00:34:49.846 [2024-07-14 09:44:34.169608] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.846 [2024-07-14 09:44:34.169779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.846 [2024-07-14 09:44:34.169810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.846 [2024-07-14 09:44:34.169825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.846 [2024-07-14 09:44:34.169851] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:49.846 [2024-07-14 09:44:34.169888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:49.846 qpair failed and we were unable to recover it. 00:34:49.846 [2024-07-14 09:44:34.179623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.846 [2024-07-14 09:44:34.179806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.846 [2024-07-14 09:44:34.179833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.846 [2024-07-14 09:44:34.179847] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.846 [2024-07-14 09:44:34.179859] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:49.846 [2024-07-14 09:44:34.179900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:49.846 qpair failed and we were unable to recover it. 
00:34:49.846 [2024-07-14 09:44:34.189689] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.846 [2024-07-14 09:44:34.189852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.846 [2024-07-14 09:44:34.189884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.846 [2024-07-14 09:44:34.189900] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.846 [2024-07-14 09:44:34.189912] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:49.846 [2024-07-14 09:44:34.189941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:49.846 qpair failed and we were unable to recover it. 00:34:49.846 [2024-07-14 09:44:34.199731] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.846 [2024-07-14 09:44:34.199905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.846 [2024-07-14 09:44:34.199939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.846 [2024-07-14 09:44:34.199954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.846 [2024-07-14 09:44:34.199966] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:49.846 [2024-07-14 09:44:34.199996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:49.846 qpair failed and we were unable to recover it. 00:34:49.846 [2024-07-14 09:44:34.209736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.846 [2024-07-14 09:44:34.209916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.846 [2024-07-14 09:44:34.209942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.846 [2024-07-14 09:44:34.209957] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.846 [2024-07-14 09:44:34.209969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:49.846 [2024-07-14 09:44:34.210004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:49.846 qpair failed and we were unable to recover it. 
00:34:49.846 [2024-07-14 09:44:34.219755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.846 [2024-07-14 09:44:34.219963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.846 [2024-07-14 09:44:34.219989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.846 [2024-07-14 09:44:34.220004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.846 [2024-07-14 09:44:34.220016] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:49.846 [2024-07-14 09:44:34.220046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:49.846 qpair failed and we were unable to recover it. 00:34:49.846 [2024-07-14 09:44:34.229805] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.846 [2024-07-14 09:44:34.229971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.846 [2024-07-14 09:44:34.229996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.846 [2024-07-14 09:44:34.230011] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.846 [2024-07-14 09:44:34.230024] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:49.846 [2024-07-14 09:44:34.230053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:49.846 qpair failed and we were unable to recover it. 00:34:49.846 [2024-07-14 09:44:34.239816] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.846 [2024-07-14 09:44:34.240000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.846 [2024-07-14 09:44:34.240026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.846 [2024-07-14 09:44:34.240040] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.846 [2024-07-14 09:44:34.240052] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:49.846 [2024-07-14 09:44:34.240081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:49.846 qpair failed and we were unable to recover it. 
00:34:49.846 [2024-07-14 09:44:34.249843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.846 [2024-07-14 09:44:34.250028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.846 [2024-07-14 09:44:34.250054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.846 [2024-07-14 09:44:34.250068] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.846 [2024-07-14 09:44:34.250080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:49.846 [2024-07-14 09:44:34.250109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:49.846 qpair failed and we were unable to recover it. 00:34:49.846 [2024-07-14 09:44:34.259852] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.846 [2024-07-14 09:44:34.260024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.846 [2024-07-14 09:44:34.260055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.846 [2024-07-14 09:44:34.260070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.846 [2024-07-14 09:44:34.260082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:49.846 [2024-07-14 09:44:34.260114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:49.846 qpair failed and we were unable to recover it. 00:34:49.846 [2024-07-14 09:44:34.269919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.846 [2024-07-14 09:44:34.270079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.846 [2024-07-14 09:44:34.270107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.846 [2024-07-14 09:44:34.270128] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.846 [2024-07-14 09:44:34.270140] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:49.846 [2024-07-14 09:44:34.270186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:49.846 qpair failed and we were unable to recover it. 
00:34:49.846 [2024-07-14 09:44:34.279929] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.846 [2024-07-14 09:44:34.280089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.846 [2024-07-14 09:44:34.280115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.846 [2024-07-14 09:44:34.280130] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.847 [2024-07-14 09:44:34.280142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:49.847 [2024-07-14 09:44:34.280172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:49.847 qpair failed and we were unable to recover it. 00:34:49.847 [2024-07-14 09:44:34.289939] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.847 [2024-07-14 09:44:34.290105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.847 [2024-07-14 09:44:34.290130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.847 [2024-07-14 09:44:34.290145] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.847 [2024-07-14 09:44:34.290157] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:49.847 [2024-07-14 09:44:34.290186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:49.847 qpair failed and we were unable to recover it. 00:34:50.105 [2024-07-14 09:44:34.300026] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.106 [2024-07-14 09:44:34.300207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.106 [2024-07-14 09:44:34.300233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.106 [2024-07-14 09:44:34.300266] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.106 [2024-07-14 09:44:34.300285] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.106 [2024-07-14 09:44:34.300330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.106 qpair failed and we were unable to recover it. 
00:34:50.106 [2024-07-14 09:44:34.310006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.106 [2024-07-14 09:44:34.310167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.106 [2024-07-14 09:44:34.310193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.106 [2024-07-14 09:44:34.310207] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.106 [2024-07-14 09:44:34.310220] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.106 [2024-07-14 09:44:34.310249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.106 qpair failed and we were unable to recover it. 00:34:50.106 [2024-07-14 09:44:34.320046] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.106 [2024-07-14 09:44:34.320218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.106 [2024-07-14 09:44:34.320244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.106 [2024-07-14 09:44:34.320259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.106 [2024-07-14 09:44:34.320271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.106 [2024-07-14 09:44:34.320300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.106 qpair failed and we were unable to recover it. 00:34:50.106 [2024-07-14 09:44:34.330086] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.106 [2024-07-14 09:44:34.330254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.106 [2024-07-14 09:44:34.330280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.106 [2024-07-14 09:44:34.330294] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.106 [2024-07-14 09:44:34.330306] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.106 [2024-07-14 09:44:34.330336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.106 qpair failed and we were unable to recover it. 
00:34:50.106 [2024-07-14 09:44:34.340103] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.106 [2024-07-14 09:44:34.340266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.106 [2024-07-14 09:44:34.340294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.106 [2024-07-14 09:44:34.340309] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.106 [2024-07-14 09:44:34.340341] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.106 [2024-07-14 09:44:34.340371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.106 qpair failed and we were unable to recover it. 00:34:50.106 [2024-07-14 09:44:34.350137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.106 [2024-07-14 09:44:34.350307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.106 [2024-07-14 09:44:34.350335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.106 [2024-07-14 09:44:34.350350] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.106 [2024-07-14 09:44:34.350362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.106 [2024-07-14 09:44:34.350391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.106 qpair failed and we were unable to recover it. 00:34:50.106 [2024-07-14 09:44:34.360169] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.106 [2024-07-14 09:44:34.360341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.106 [2024-07-14 09:44:34.360367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.106 [2024-07-14 09:44:34.360397] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.106 [2024-07-14 09:44:34.360409] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.106 [2024-07-14 09:44:34.360441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.106 qpair failed and we were unable to recover it. 
00:34:50.106 [2024-07-14 09:44:34.370158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.106 [2024-07-14 09:44:34.370321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.106 [2024-07-14 09:44:34.370347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.106 [2024-07-14 09:44:34.370362] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.106 [2024-07-14 09:44:34.370374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.106 [2024-07-14 09:44:34.370403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.106 qpair failed and we were unable to recover it. 00:34:50.106 [2024-07-14 09:44:34.380211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.106 [2024-07-14 09:44:34.380367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.106 [2024-07-14 09:44:34.380393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.106 [2024-07-14 09:44:34.380407] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.106 [2024-07-14 09:44:34.380419] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.106 [2024-07-14 09:44:34.380453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.106 qpair failed and we were unable to recover it. 00:34:50.106 [2024-07-14 09:44:34.390203] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.106 [2024-07-14 09:44:34.390364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.106 [2024-07-14 09:44:34.390389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.106 [2024-07-14 09:44:34.390404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.106 [2024-07-14 09:44:34.390423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.106 [2024-07-14 09:44:34.390454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.106 qpair failed and we were unable to recover it. 
00:34:50.106 [2024-07-14 09:44:34.400241] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.106 [2024-07-14 09:44:34.400419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.106 [2024-07-14 09:44:34.400445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.106 [2024-07-14 09:44:34.400459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.106 [2024-07-14 09:44:34.400471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.106 [2024-07-14 09:44:34.400501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.106 qpair failed and we were unable to recover it. 00:34:50.106 [2024-07-14 09:44:34.410265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.106 [2024-07-14 09:44:34.410433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.106 [2024-07-14 09:44:34.410458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.106 [2024-07-14 09:44:34.410472] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.106 [2024-07-14 09:44:34.410484] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.106 [2024-07-14 09:44:34.410513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.106 qpair failed and we were unable to recover it. 00:34:50.106 [2024-07-14 09:44:34.420301] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.106 [2024-07-14 09:44:34.420462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.106 [2024-07-14 09:44:34.420486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.106 [2024-07-14 09:44:34.420500] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.106 [2024-07-14 09:44:34.420512] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.106 [2024-07-14 09:44:34.420541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.106 qpair failed and we were unable to recover it. 
00:34:50.106 [2024-07-14 09:44:34.430316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.106 [2024-07-14 09:44:34.430475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.106 [2024-07-14 09:44:34.430501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.106 [2024-07-14 09:44:34.430515] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.106 [2024-07-14 09:44:34.430527] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.106 [2024-07-14 09:44:34.430556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.106 qpair failed and we were unable to recover it. 00:34:50.106 [2024-07-14 09:44:34.440377] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.106 [2024-07-14 09:44:34.440544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.106 [2024-07-14 09:44:34.440569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.107 [2024-07-14 09:44:34.440583] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.107 [2024-07-14 09:44:34.440595] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.107 [2024-07-14 09:44:34.440624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.107 qpair failed and we were unable to recover it. 00:34:50.107 [2024-07-14 09:44:34.450385] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.107 [2024-07-14 09:44:34.450551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.107 [2024-07-14 09:44:34.450577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.107 [2024-07-14 09:44:34.450591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.107 [2024-07-14 09:44:34.450603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.107 [2024-07-14 09:44:34.450632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.107 qpair failed and we were unable to recover it. 
00:34:50.107 [2024-07-14 09:44:34.460428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.107 [2024-07-14 09:44:34.460598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.107 [2024-07-14 09:44:34.460624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.107 [2024-07-14 09:44:34.460638] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.107 [2024-07-14 09:44:34.460650] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.107 [2024-07-14 09:44:34.460679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.107 qpair failed and we were unable to recover it. 00:34:50.107 [2024-07-14 09:44:34.470481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.107 [2024-07-14 09:44:34.470665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.107 [2024-07-14 09:44:34.470691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.107 [2024-07-14 09:44:34.470705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.107 [2024-07-14 09:44:34.470732] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.107 [2024-07-14 09:44:34.470761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.107 qpair failed and we were unable to recover it. 00:34:50.107 [2024-07-14 09:44:34.480518] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.107 [2024-07-14 09:44:34.480686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.107 [2024-07-14 09:44:34.480725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.107 [2024-07-14 09:44:34.480744] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.107 [2024-07-14 09:44:34.480758] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.107 [2024-07-14 09:44:34.480802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.107 qpair failed and we were unable to recover it. 
00:34:50.107 [2024-07-14 09:44:34.490499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.107 [2024-07-14 09:44:34.490667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.107 [2024-07-14 09:44:34.490692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.107 [2024-07-14 09:44:34.490706] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.107 [2024-07-14 09:44:34.490718] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.107 [2024-07-14 09:44:34.490747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.107 qpair failed and we were unable to recover it. 00:34:50.107 [2024-07-14 09:44:34.500521] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.107 [2024-07-14 09:44:34.500682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.107 [2024-07-14 09:44:34.500707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.107 [2024-07-14 09:44:34.500721] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.107 [2024-07-14 09:44:34.500733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.107 [2024-07-14 09:44:34.500762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.107 qpair failed and we were unable to recover it. 00:34:50.107 [2024-07-14 09:44:34.510554] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.107 [2024-07-14 09:44:34.510716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.107 [2024-07-14 09:44:34.510741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.107 [2024-07-14 09:44:34.510756] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.107 [2024-07-14 09:44:34.510768] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.107 [2024-07-14 09:44:34.510812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.107 qpair failed and we were unable to recover it. 
00:34:50.107 [2024-07-14 09:44:34.520596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.107 [2024-07-14 09:44:34.520789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.107 [2024-07-14 09:44:34.520830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.107 [2024-07-14 09:44:34.520845] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.107 [2024-07-14 09:44:34.520856] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.107 [2024-07-14 09:44:34.520909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.107 qpair failed and we were unable to recover it. 00:34:50.107 [2024-07-14 09:44:34.530615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.107 [2024-07-14 09:44:34.530781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.107 [2024-07-14 09:44:34.530807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.107 [2024-07-14 09:44:34.530821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.107 [2024-07-14 09:44:34.530833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.107 [2024-07-14 09:44:34.530863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.107 qpair failed and we were unable to recover it. 00:34:50.107 [2024-07-14 09:44:34.540640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.107 [2024-07-14 09:44:34.540807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.107 [2024-07-14 09:44:34.540833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.107 [2024-07-14 09:44:34.540847] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.107 [2024-07-14 09:44:34.540859] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.107 [2024-07-14 09:44:34.540896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.107 qpair failed and we were unable to recover it. 
00:34:50.107 [2024-07-14 09:44:34.550651] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.107 [2024-07-14 09:44:34.550804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.107 [2024-07-14 09:44:34.550829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.107 [2024-07-14 09:44:34.550843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.107 [2024-07-14 09:44:34.550855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.107 [2024-07-14 09:44:34.550895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.107 qpair failed and we were unable to recover it. 00:34:50.366 [2024-07-14 09:44:34.560732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.366 [2024-07-14 09:44:34.560948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.366 [2024-07-14 09:44:34.560974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.366 [2024-07-14 09:44:34.560988] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.366 [2024-07-14 09:44:34.561000] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.366 [2024-07-14 09:44:34.561029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.366 qpair failed and we were unable to recover it. 00:34:50.366 [2024-07-14 09:44:34.570769] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.366 [2024-07-14 09:44:34.570947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.366 [2024-07-14 09:44:34.570979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.366 [2024-07-14 09:44:34.570994] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.366 [2024-07-14 09:44:34.571006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.366 [2024-07-14 09:44:34.571036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.366 qpair failed and we were unable to recover it. 
00:34:50.366 [2024-07-14 09:44:34.580767] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.366 [2024-07-14 09:44:34.580942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.366 [2024-07-14 09:44:34.580967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.366 [2024-07-14 09:44:34.580981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.366 [2024-07-14 09:44:34.580993] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.366 [2024-07-14 09:44:34.581023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.366 qpair failed and we were unable to recover it. 00:34:50.366 [2024-07-14 09:44:34.590793] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.366 [2024-07-14 09:44:34.590971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.366 [2024-07-14 09:44:34.590996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.366 [2024-07-14 09:44:34.591011] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.366 [2024-07-14 09:44:34.591023] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.366 [2024-07-14 09:44:34.591053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.366 qpair failed and we were unable to recover it. 00:34:50.366 [2024-07-14 09:44:34.600926] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.366 [2024-07-14 09:44:34.601107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.366 [2024-07-14 09:44:34.601132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.366 [2024-07-14 09:44:34.601147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.366 [2024-07-14 09:44:34.601159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.367 [2024-07-14 09:44:34.601204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.367 qpair failed and we were unable to recover it. 
00:34:50.367 [2024-07-14 09:44:34.610904] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.367 [2024-07-14 09:44:34.611074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.367 [2024-07-14 09:44:34.611099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.367 [2024-07-14 09:44:34.611113] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.367 [2024-07-14 09:44:34.611125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.367 [2024-07-14 09:44:34.611160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.367 qpair failed and we were unable to recover it. 00:34:50.367 [2024-07-14 09:44:34.620938] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.367 [2024-07-14 09:44:34.621098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.367 [2024-07-14 09:44:34.621123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.367 [2024-07-14 09:44:34.621138] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.367 [2024-07-14 09:44:34.621150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.367 [2024-07-14 09:44:34.621184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.367 qpair failed and we were unable to recover it. 00:34:50.367 [2024-07-14 09:44:34.630948] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.367 [2024-07-14 09:44:34.631103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.367 [2024-07-14 09:44:34.631129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.367 [2024-07-14 09:44:34.631143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.367 [2024-07-14 09:44:34.631155] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.367 [2024-07-14 09:44:34.631185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.367 qpair failed and we were unable to recover it. 
00:34:50.367 [2024-07-14 09:44:34.640973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.367 [2024-07-14 09:44:34.641151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.367 [2024-07-14 09:44:34.641176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.367 [2024-07-14 09:44:34.641190] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.367 [2024-07-14 09:44:34.641202] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.367 [2024-07-14 09:44:34.641231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.367 qpair failed and we were unable to recover it. 00:34:50.367 [2024-07-14 09:44:34.651005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.367 [2024-07-14 09:44:34.651178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.367 [2024-07-14 09:44:34.651203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.367 [2024-07-14 09:44:34.651217] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.367 [2024-07-14 09:44:34.651229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.367 [2024-07-14 09:44:34.651258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.367 qpair failed and we were unable to recover it. 00:34:50.367 [2024-07-14 09:44:34.660980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.367 [2024-07-14 09:44:34.661148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.367 [2024-07-14 09:44:34.661179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.367 [2024-07-14 09:44:34.661194] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.367 [2024-07-14 09:44:34.661206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.367 [2024-07-14 09:44:34.661238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.367 qpair failed and we were unable to recover it. 
00:34:50.367 [2024-07-14 09:44:34.671042] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.367 [2024-07-14 09:44:34.671222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.367 [2024-07-14 09:44:34.671247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.367 [2024-07-14 09:44:34.671261] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.367 [2024-07-14 09:44:34.671274] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.367 [2024-07-14 09:44:34.671318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.367 qpair failed and we were unable to recover it. 00:34:50.367 [2024-07-14 09:44:34.681038] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.367 [2024-07-14 09:44:34.681219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.367 [2024-07-14 09:44:34.681245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.367 [2024-07-14 09:44:34.681259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.367 [2024-07-14 09:44:34.681272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.367 [2024-07-14 09:44:34.681304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.367 qpair failed and we were unable to recover it. 00:34:50.367 [2024-07-14 09:44:34.691071] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.367 [2024-07-14 09:44:34.691235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.367 [2024-07-14 09:44:34.691261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.367 [2024-07-14 09:44:34.691275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.367 [2024-07-14 09:44:34.691288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.367 [2024-07-14 09:44:34.691316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.367 qpair failed and we were unable to recover it. 
00:34:50.367 [2024-07-14 09:44:34.701106] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.367 [2024-07-14 09:44:34.701314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.367 [2024-07-14 09:44:34.701342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.367 [2024-07-14 09:44:34.701357] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.367 [2024-07-14 09:44:34.701376] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.367 [2024-07-14 09:44:34.701407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.367 qpair failed and we were unable to recover it. 00:34:50.367 [2024-07-14 09:44:34.711143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.367 [2024-07-14 09:44:34.711300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.367 [2024-07-14 09:44:34.711326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.367 [2024-07-14 09:44:34.711341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.367 [2024-07-14 09:44:34.711353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.367 [2024-07-14 09:44:34.711383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.367 qpair failed and we were unable to recover it. 00:34:50.367 [2024-07-14 09:44:34.721192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.367 [2024-07-14 09:44:34.721355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.367 [2024-07-14 09:44:34.721381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.367 [2024-07-14 09:44:34.721396] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.367 [2024-07-14 09:44:34.721423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.367 [2024-07-14 09:44:34.721455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.367 qpair failed and we were unable to recover it. 
00:34:50.367 [2024-07-14 09:44:34.731206] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.367 [2024-07-14 09:44:34.731372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.367 [2024-07-14 09:44:34.731398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.367 [2024-07-14 09:44:34.731413] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.367 [2024-07-14 09:44:34.731425] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.367 [2024-07-14 09:44:34.731455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.367 qpair failed and we were unable to recover it. 00:34:50.367 [2024-07-14 09:44:34.741199] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.367 [2024-07-14 09:44:34.741389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.367 [2024-07-14 09:44:34.741415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.367 [2024-07-14 09:44:34.741444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.367 [2024-07-14 09:44:34.741456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.367 [2024-07-14 09:44:34.741486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.367 qpair failed and we were unable to recover it. 00:34:50.367 [2024-07-14 09:44:34.751269] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.368 [2024-07-14 09:44:34.751455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.368 [2024-07-14 09:44:34.751481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.368 [2024-07-14 09:44:34.751496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.368 [2024-07-14 09:44:34.751508] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.368 [2024-07-14 09:44:34.751537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.368 qpair failed and we were unable to recover it. 
00:34:50.368 [2024-07-14 09:44:34.761297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.368 [2024-07-14 09:44:34.761463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.368 [2024-07-14 09:44:34.761489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.368 [2024-07-14 09:44:34.761503] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.368 [2024-07-14 09:44:34.761516] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.368 [2024-07-14 09:44:34.761545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.368 qpair failed and we were unable to recover it. 00:34:50.368 [2024-07-14 09:44:34.771354] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.368 [2024-07-14 09:44:34.771516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.368 [2024-07-14 09:44:34.771542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.368 [2024-07-14 09:44:34.771556] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.368 [2024-07-14 09:44:34.771583] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.368 [2024-07-14 09:44:34.771612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.368 qpair failed and we were unable to recover it. 00:34:50.368 [2024-07-14 09:44:34.781383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.368 [2024-07-14 09:44:34.781577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.368 [2024-07-14 09:44:34.781603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.368 [2024-07-14 09:44:34.781632] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.368 [2024-07-14 09:44:34.781644] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.368 [2024-07-14 09:44:34.781702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.368 qpair failed and we were unable to recover it. 
00:34:50.368 [2024-07-14 09:44:34.791389] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.368 [2024-07-14 09:44:34.791551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.368 [2024-07-14 09:44:34.791578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.368 [2024-07-14 09:44:34.791596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.368 [2024-07-14 09:44:34.791630] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.368 [2024-07-14 09:44:34.791660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.368 qpair failed and we were unable to recover it. 00:34:50.368 [2024-07-14 09:44:34.801419] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.368 [2024-07-14 09:44:34.801589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.368 [2024-07-14 09:44:34.801616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.368 [2024-07-14 09:44:34.801647] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.368 [2024-07-14 09:44:34.801661] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.368 [2024-07-14 09:44:34.801690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.368 qpair failed and we were unable to recover it. 00:34:50.368 [2024-07-14 09:44:34.811438] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.368 [2024-07-14 09:44:34.811647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.368 [2024-07-14 09:44:34.811674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.368 [2024-07-14 09:44:34.811688] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.368 [2024-07-14 09:44:34.811700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.368 [2024-07-14 09:44:34.811740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.368 qpair failed and we were unable to recover it. 
00:34:50.627 [2024-07-14 09:44:34.821493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.627 [2024-07-14 09:44:34.821680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.627 [2024-07-14 09:44:34.821708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.627 [2024-07-14 09:44:34.821723] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.627 [2024-07-14 09:44:34.821749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.627 [2024-07-14 09:44:34.821780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.627 qpair failed and we were unable to recover it. 00:34:50.627 [2024-07-14 09:44:34.831490] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.627 [2024-07-14 09:44:34.831651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.627 [2024-07-14 09:44:34.831677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.627 [2024-07-14 09:44:34.831692] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.627 [2024-07-14 09:44:34.831718] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.627 [2024-07-14 09:44:34.831749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.627 qpair failed and we were unable to recover it. 00:34:50.627 [2024-07-14 09:44:34.841509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.627 [2024-07-14 09:44:34.841670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.627 [2024-07-14 09:44:34.841696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.627 [2024-07-14 09:44:34.841710] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.627 [2024-07-14 09:44:34.841722] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.627 [2024-07-14 09:44:34.841751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.627 qpair failed and we were unable to recover it. 
00:34:50.627 [2024-07-14 09:44:34.851551] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.627 [2024-07-14 09:44:34.851785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.627 [2024-07-14 09:44:34.851810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.627 [2024-07-14 09:44:34.851825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.627 [2024-07-14 09:44:34.851836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.627 [2024-07-14 09:44:34.851887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.627 qpair failed and we were unable to recover it. 00:34:50.627 [2024-07-14 09:44:34.861547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.627 [2024-07-14 09:44:34.861706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.627 [2024-07-14 09:44:34.861732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.627 [2024-07-14 09:44:34.861746] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.627 [2024-07-14 09:44:34.861758] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.627 [2024-07-14 09:44:34.861787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.627 qpair failed and we were unable to recover it. 00:34:50.627 [2024-07-14 09:44:34.871613] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.627 [2024-07-14 09:44:34.871767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.627 [2024-07-14 09:44:34.871793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.627 [2024-07-14 09:44:34.871807] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.627 [2024-07-14 09:44:34.871819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.627 [2024-07-14 09:44:34.871848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.627 qpair failed and we were unable to recover it. 
00:34:50.627 [2024-07-14 09:44:34.881643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.627 [2024-07-14 09:44:34.881845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.627 [2024-07-14 09:44:34.881880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.627 [2024-07-14 09:44:34.881906] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.627 [2024-07-14 09:44:34.881921] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.627 [2024-07-14 09:44:34.881954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.627 qpair failed and we were unable to recover it. 00:34:50.627 [2024-07-14 09:44:34.891653] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.627 [2024-07-14 09:44:34.891829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.627 [2024-07-14 09:44:34.891856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.627 [2024-07-14 09:44:34.891880] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.628 [2024-07-14 09:44:34.891895] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.628 [2024-07-14 09:44:34.891925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.628 qpair failed and we were unable to recover it. 00:34:50.628 [2024-07-14 09:44:34.901649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.628 [2024-07-14 09:44:34.901802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.628 [2024-07-14 09:44:34.901828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.628 [2024-07-14 09:44:34.901843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.628 [2024-07-14 09:44:34.901855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.628 [2024-07-14 09:44:34.901894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.628 qpair failed and we were unable to recover it. 
00:34:50.628 [2024-07-14 09:44:34.911699] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.628 [2024-07-14 09:44:34.911856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.628 [2024-07-14 09:44:34.911893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.628 [2024-07-14 09:44:34.911908] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.628 [2024-07-14 09:44:34.911920] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.628 [2024-07-14 09:44:34.911950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.628 qpair failed and we were unable to recover it. 00:34:50.628 [2024-07-14 09:44:34.921745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.628 [2024-07-14 09:44:34.921920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.628 [2024-07-14 09:44:34.921945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.628 [2024-07-14 09:44:34.921959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.628 [2024-07-14 09:44:34.921972] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.628 [2024-07-14 09:44:34.922001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.628 qpair failed and we were unable to recover it. 00:34:50.628 [2024-07-14 09:44:34.931772] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.628 [2024-07-14 09:44:34.931952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.628 [2024-07-14 09:44:34.931987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.628 [2024-07-14 09:44:34.932002] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.628 [2024-07-14 09:44:34.932014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.628 [2024-07-14 09:44:34.932045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.628 qpair failed and we were unable to recover it. 
00:34:50.628 [2024-07-14 09:44:34.941797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.628 [2024-07-14 09:44:34.941959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.628 [2024-07-14 09:44:34.941985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.628 [2024-07-14 09:44:34.941999] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.628 [2024-07-14 09:44:34.942011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.628 [2024-07-14 09:44:34.942041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.628 qpair failed and we were unable to recover it. 00:34:50.628 [2024-07-14 09:44:34.951943] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.628 [2024-07-14 09:44:34.952107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.628 [2024-07-14 09:44:34.952133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.628 [2024-07-14 09:44:34.952148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.628 [2024-07-14 09:44:34.952160] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.628 [2024-07-14 09:44:34.952190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.628 qpair failed and we were unable to recover it. 00:34:50.628 [2024-07-14 09:44:34.961880] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.628 [2024-07-14 09:44:34.962086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.628 [2024-07-14 09:44:34.962111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.628 [2024-07-14 09:44:34.962126] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.628 [2024-07-14 09:44:34.962138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.628 [2024-07-14 09:44:34.962168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.628 qpair failed and we were unable to recover it. 
00:34:50.628 [2024-07-14 09:44:34.971937] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.628 [2024-07-14 09:44:34.972156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.628 [2024-07-14 09:44:34.972207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.628 [2024-07-14 09:44:34.972222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.628 [2024-07-14 09:44:34.972234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.628 [2024-07-14 09:44:34.972262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.628 qpair failed and we were unable to recover it. 00:34:50.628 [2024-07-14 09:44:34.981905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.628 [2024-07-14 09:44:34.982071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.628 [2024-07-14 09:44:34.982096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.628 [2024-07-14 09:44:34.982110] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.628 [2024-07-14 09:44:34.982122] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.628 [2024-07-14 09:44:34.982151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.628 qpair failed and we were unable to recover it. 00:34:50.628 [2024-07-14 09:44:34.991946] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.628 [2024-07-14 09:44:34.992105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.628 [2024-07-14 09:44:34.992130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.628 [2024-07-14 09:44:34.992145] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.628 [2024-07-14 09:44:34.992157] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.628 [2024-07-14 09:44:34.992186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.628 qpair failed and we were unable to recover it. 
00:34:50.628 [2024-07-14 09:44:35.002000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.628 [2024-07-14 09:44:35.002165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.628 [2024-07-14 09:44:35.002190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.628 [2024-07-14 09:44:35.002204] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.628 [2024-07-14 09:44:35.002231] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.628 [2024-07-14 09:44:35.002263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.628 qpair failed and we were unable to recover it. 00:34:50.628 [2024-07-14 09:44:35.012100] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.628 [2024-07-14 09:44:35.012289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.628 [2024-07-14 09:44:35.012315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.628 [2024-07-14 09:44:35.012332] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.628 [2024-07-14 09:44:35.012344] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.628 [2024-07-14 09:44:35.012393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.628 qpair failed and we were unable to recover it. 00:34:50.628 [2024-07-14 09:44:35.022038] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.628 [2024-07-14 09:44:35.022209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.628 [2024-07-14 09:44:35.022235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.628 [2024-07-14 09:44:35.022250] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.628 [2024-07-14 09:44:35.022276] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.628 [2024-07-14 09:44:35.022308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.628 qpair failed and we were unable to recover it. 
00:34:50.628 [2024-07-14 09:44:35.032062] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.628 [2024-07-14 09:44:35.032252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.628 [2024-07-14 09:44:35.032293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.628 [2024-07-14 09:44:35.032307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.628 [2024-07-14 09:44:35.032319] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.628 [2024-07-14 09:44:35.032363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.628 qpair failed and we were unable to recover it. 00:34:50.629 [2024-07-14 09:44:35.042088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.629 [2024-07-14 09:44:35.042253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.629 [2024-07-14 09:44:35.042278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.629 [2024-07-14 09:44:35.042293] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.629 [2024-07-14 09:44:35.042305] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.629 [2024-07-14 09:44:35.042334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.629 qpair failed and we were unable to recover it. 00:34:50.629 [2024-07-14 09:44:35.052132] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.629 [2024-07-14 09:44:35.052303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.629 [2024-07-14 09:44:35.052328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.629 [2024-07-14 09:44:35.052343] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.629 [2024-07-14 09:44:35.052355] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.629 [2024-07-14 09:44:35.052384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.629 qpair failed and we were unable to recover it. 
00:34:50.629 [2024-07-14 09:44:35.062152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.629 [2024-07-14 09:44:35.062319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.629 [2024-07-14 09:44:35.062349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.629 [2024-07-14 09:44:35.062364] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.629 [2024-07-14 09:44:35.062391] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.629 [2024-07-14 09:44:35.062422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.629 qpair failed and we were unable to recover it. 00:34:50.629 [2024-07-14 09:44:35.072170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.629 [2024-07-14 09:44:35.072328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.629 [2024-07-14 09:44:35.072354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.629 [2024-07-14 09:44:35.072368] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.629 [2024-07-14 09:44:35.072380] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.629 [2024-07-14 09:44:35.072409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.629 qpair failed and we were unable to recover it. 00:34:50.887 [2024-07-14 09:44:35.082210] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.887 [2024-07-14 09:44:35.082373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.887 [2024-07-14 09:44:35.082398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.887 [2024-07-14 09:44:35.082413] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.887 [2024-07-14 09:44:35.082425] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.887 [2024-07-14 09:44:35.082454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.887 qpair failed and we were unable to recover it. 
00:34:50.887 [2024-07-14 09:44:35.092241] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.887 [2024-07-14 09:44:35.092451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.887 [2024-07-14 09:44:35.092476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.887 [2024-07-14 09:44:35.092491] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.887 [2024-07-14 09:44:35.092503] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.887 [2024-07-14 09:44:35.092532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.887 qpair failed and we were unable to recover it. 00:34:50.887 [2024-07-14 09:44:35.102313] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.887 [2024-07-14 09:44:35.102495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.887 [2024-07-14 09:44:35.102521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.887 [2024-07-14 09:44:35.102550] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.887 [2024-07-14 09:44:35.102562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.887 [2024-07-14 09:44:35.102614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.887 qpair failed and we were unable to recover it. 00:34:50.887 [2024-07-14 09:44:35.112319] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.887 [2024-07-14 09:44:35.112485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.887 [2024-07-14 09:44:35.112511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.887 [2024-07-14 09:44:35.112525] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.887 [2024-07-14 09:44:35.112537] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.887 [2024-07-14 09:44:35.112581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.887 qpair failed and we were unable to recover it. 
00:34:50.887 [2024-07-14 09:44:35.122358] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.887 [2024-07-14 09:44:35.122523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.887 [2024-07-14 09:44:35.122549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.887 [2024-07-14 09:44:35.122564] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.887 [2024-07-14 09:44:35.122576] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.887 [2024-07-14 09:44:35.122605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.888 qpair failed and we were unable to recover it. 00:34:50.888 [2024-07-14 09:44:35.132365] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.888 [2024-07-14 09:44:35.132535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.888 [2024-07-14 09:44:35.132561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.888 [2024-07-14 09:44:35.132575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.888 [2024-07-14 09:44:35.132588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.888 [2024-07-14 09:44:35.132632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.888 qpair failed and we were unable to recover it. 00:34:50.888 [2024-07-14 09:44:35.142391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.888 [2024-07-14 09:44:35.142554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.888 [2024-07-14 09:44:35.142578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.888 [2024-07-14 09:44:35.142592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.888 [2024-07-14 09:44:35.142605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.888 [2024-07-14 09:44:35.142649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.888 qpair failed and we were unable to recover it. 
00:34:50.888 [2024-07-14 09:44:35.152374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.888 [2024-07-14 09:44:35.152536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.888 [2024-07-14 09:44:35.152562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.888 [2024-07-14 09:44:35.152577] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.888 [2024-07-14 09:44:35.152589] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.888 [2024-07-14 09:44:35.152618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.888 qpair failed and we were unable to recover it. 00:34:50.888 [2024-07-14 09:44:35.162470] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.888 [2024-07-14 09:44:35.162639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.888 [2024-07-14 09:44:35.162664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.888 [2024-07-14 09:44:35.162678] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.888 [2024-07-14 09:44:35.162706] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.888 [2024-07-14 09:44:35.162735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.888 qpair failed and we were unable to recover it. 00:34:50.888 [2024-07-14 09:44:35.172459] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.888 [2024-07-14 09:44:35.172667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.888 [2024-07-14 09:44:35.172693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.888 [2024-07-14 09:44:35.172707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.888 [2024-07-14 09:44:35.172719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.888 [2024-07-14 09:44:35.172748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.888 qpair failed and we were unable to recover it. 
00:34:50.888 [2024-07-14 09:44:35.182532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.888 [2024-07-14 09:44:35.182696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.888 [2024-07-14 09:44:35.182722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.888 [2024-07-14 09:44:35.182737] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.888 [2024-07-14 09:44:35.182749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.888 [2024-07-14 09:44:35.182793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.888 qpair failed and we were unable to recover it. 00:34:50.888 [2024-07-14 09:44:35.192521] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.888 [2024-07-14 09:44:35.192688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.888 [2024-07-14 09:44:35.192713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.888 [2024-07-14 09:44:35.192728] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.888 [2024-07-14 09:44:35.192761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.888 [2024-07-14 09:44:35.192792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.888 qpair failed and we were unable to recover it. 00:34:50.888 [2024-07-14 09:44:35.202550] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.888 [2024-07-14 09:44:35.202720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.888 [2024-07-14 09:44:35.202745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.888 [2024-07-14 09:44:35.202760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.888 [2024-07-14 09:44:35.202772] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.888 [2024-07-14 09:44:35.202805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.888 qpair failed and we were unable to recover it. 
00:34:50.888 [2024-07-14 09:44:35.212565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.888 [2024-07-14 09:44:35.212745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.888 [2024-07-14 09:44:35.212771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.888 [2024-07-14 09:44:35.212786] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.888 [2024-07-14 09:44:35.212798] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.888 [2024-07-14 09:44:35.212842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.888 qpair failed and we were unable to recover it. 00:34:50.888 [2024-07-14 09:44:35.222677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.888 [2024-07-14 09:44:35.222854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.888 [2024-07-14 09:44:35.222885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.888 [2024-07-14 09:44:35.222916] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.888 [2024-07-14 09:44:35.222928] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.888 [2024-07-14 09:44:35.222961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.888 qpair failed and we were unable to recover it. 00:34:50.888 [2024-07-14 09:44:35.232659] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.888 [2024-07-14 09:44:35.232838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.888 [2024-07-14 09:44:35.232871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.888 [2024-07-14 09:44:35.232889] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.888 [2024-07-14 09:44:35.232905] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.888 [2024-07-14 09:44:35.232947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.888 qpair failed and we were unable to recover it. 
00:34:50.888 [2024-07-14 09:44:35.242694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.888 [2024-07-14 09:44:35.242879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.888 [2024-07-14 09:44:35.242905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.888 [2024-07-14 09:44:35.242919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.888 [2024-07-14 09:44:35.242931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.888 [2024-07-14 09:44:35.242961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.888 qpair failed and we were unable to recover it. 00:34:50.888 [2024-07-14 09:44:35.252710] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.888 [2024-07-14 09:44:35.252884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.888 [2024-07-14 09:44:35.252910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.888 [2024-07-14 09:44:35.252925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.888 [2024-07-14 09:44:35.252937] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.888 [2024-07-14 09:44:35.252967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.888 qpair failed and we were unable to recover it. 00:34:50.888 [2024-07-14 09:44:35.262719] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.888 [2024-07-14 09:44:35.262910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.888 [2024-07-14 09:44:35.262936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.888 [2024-07-14 09:44:35.262951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.888 [2024-07-14 09:44:35.262963] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.888 [2024-07-14 09:44:35.262992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.888 qpair failed and we were unable to recover it. 
00:34:50.888 [2024-07-14 09:44:35.272743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.888 [2024-07-14 09:44:35.272911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.889 [2024-07-14 09:44:35.272936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.889 [2024-07-14 09:44:35.272950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.889 [2024-07-14 09:44:35.272962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.889 [2024-07-14 09:44:35.272992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.889 qpair failed and we were unable to recover it. 00:34:50.889 [2024-07-14 09:44:35.282791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.889 [2024-07-14 09:44:35.282965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.889 [2024-07-14 09:44:35.282991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.889 [2024-07-14 09:44:35.283010] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.889 [2024-07-14 09:44:35.283024] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.889 [2024-07-14 09:44:35.283054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.889 qpair failed and we were unable to recover it. 00:34:50.889 [2024-07-14 09:44:35.292805] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.889 [2024-07-14 09:44:35.292979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.889 [2024-07-14 09:44:35.293005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.889 [2024-07-14 09:44:35.293019] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.889 [2024-07-14 09:44:35.293031] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.889 [2024-07-14 09:44:35.293060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.889 qpair failed and we were unable to recover it. 
00:34:50.889 [2024-07-14 09:44:35.302831] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.889 [2024-07-14 09:44:35.303036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.889 [2024-07-14 09:44:35.303062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.889 [2024-07-14 09:44:35.303076] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.889 [2024-07-14 09:44:35.303089] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.889 [2024-07-14 09:44:35.303122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.889 qpair failed and we were unable to recover it. 00:34:50.889 [2024-07-14 09:44:35.312852] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.889 [2024-07-14 09:44:35.313019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.889 [2024-07-14 09:44:35.313044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.889 [2024-07-14 09:44:35.313059] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.889 [2024-07-14 09:44:35.313070] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.889 [2024-07-14 09:44:35.313099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.889 qpair failed and we were unable to recover it. 00:34:50.889 [2024-07-14 09:44:35.322947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.889 [2024-07-14 09:44:35.323112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.889 [2024-07-14 09:44:35.323138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.889 [2024-07-14 09:44:35.323153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.889 [2024-07-14 09:44:35.323165] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.889 [2024-07-14 09:44:35.323193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.889 qpair failed and we were unable to recover it. 
00:34:50.889 [2024-07-14 09:44:35.332949] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.889 [2024-07-14 09:44:35.333135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.889 [2024-07-14 09:44:35.333160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.889 [2024-07-14 09:44:35.333175] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.889 [2024-07-14 09:44:35.333202] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:50.889 [2024-07-14 09:44:35.333230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:50.889 qpair failed and we were unable to recover it. 00:34:51.148 [2024-07-14 09:44:35.343020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.148 [2024-07-14 09:44:35.343179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.148 [2024-07-14 09:44:35.343206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.148 [2024-07-14 09:44:35.343221] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.148 [2024-07-14 09:44:35.343233] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.148 [2024-07-14 09:44:35.343277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.148 qpair failed and we were unable to recover it. 00:34:51.148 [2024-07-14 09:44:35.353092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.148 [2024-07-14 09:44:35.353268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.148 [2024-07-14 09:44:35.353293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.148 [2024-07-14 09:44:35.353307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.148 [2024-07-14 09:44:35.353319] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.148 [2024-07-14 09:44:35.353362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.148 qpair failed and we were unable to recover it. 
00:34:51.148 [2024-07-14 09:44:35.363032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.148 [2024-07-14 09:44:35.363213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.148 [2024-07-14 09:44:35.363238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.148 [2024-07-14 09:44:35.363254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.148 [2024-07-14 09:44:35.363266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.148 [2024-07-14 09:44:35.363307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.148 qpair failed and we were unable to recover it. 00:34:51.148 [2024-07-14 09:44:35.373042] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.148 [2024-07-14 09:44:35.373256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.148 [2024-07-14 09:44:35.373283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.148 [2024-07-14 09:44:35.373302] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.148 [2024-07-14 09:44:35.373315] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.148 [2024-07-14 09:44:35.373344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.148 qpair failed and we were unable to recover it. 00:34:51.148 [2024-07-14 09:44:35.383087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.148 [2024-07-14 09:44:35.383249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.148 [2024-07-14 09:44:35.383275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.148 [2024-07-14 09:44:35.383289] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.148 [2024-07-14 09:44:35.383321] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.148 [2024-07-14 09:44:35.383351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.148 qpair failed and we were unable to recover it. 
00:34:51.148 [2024-07-14 09:44:35.393126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.148 [2024-07-14 09:44:35.393345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.148 [2024-07-14 09:44:35.393371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.148 [2024-07-14 09:44:35.393386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.148 [2024-07-14 09:44:35.393398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.148 [2024-07-14 09:44:35.393427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.148 qpair failed and we were unable to recover it. 00:34:51.148 [2024-07-14 09:44:35.403124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.148 [2024-07-14 09:44:35.403306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.148 [2024-07-14 09:44:35.403332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.148 [2024-07-14 09:44:35.403346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.148 [2024-07-14 09:44:35.403359] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.148 [2024-07-14 09:44:35.403388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.148 qpair failed and we were unable to recover it. 00:34:51.148 [2024-07-14 09:44:35.413174] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.148 [2024-07-14 09:44:35.413416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.148 [2024-07-14 09:44:35.413441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.148 [2024-07-14 09:44:35.413455] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.148 [2024-07-14 09:44:35.413467] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.149 [2024-07-14 09:44:35.413510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.149 qpair failed and we were unable to recover it. 
00:34:51.149 [2024-07-14 09:44:35.423228] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.149 [2024-07-14 09:44:35.423475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.149 [2024-07-14 09:44:35.423504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.149 [2024-07-14 09:44:35.423517] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.149 [2024-07-14 09:44:35.423530] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.149 [2024-07-14 09:44:35.423572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.149 qpair failed and we were unable to recover it. 00:34:51.149 [2024-07-14 09:44:35.433207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.149 [2024-07-14 09:44:35.433393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.149 [2024-07-14 09:44:35.433419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.149 [2024-07-14 09:44:35.433433] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.149 [2024-07-14 09:44:35.433445] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.149 [2024-07-14 09:44:35.433474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.149 qpair failed and we were unable to recover it. 00:34:51.149 [2024-07-14 09:44:35.443366] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.149 [2024-07-14 09:44:35.443556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.149 [2024-07-14 09:44:35.443598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.149 [2024-07-14 09:44:35.443612] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.149 [2024-07-14 09:44:35.443624] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.149 [2024-07-14 09:44:35.443667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.149 qpair failed and we were unable to recover it. 
00:34:51.149 [2024-07-14 09:44:35.453323] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.149 [2024-07-14 09:44:35.453550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.149 [2024-07-14 09:44:35.453574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.149 [2024-07-14 09:44:35.453588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.149 [2024-07-14 09:44:35.453600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.149 [2024-07-14 09:44:35.453641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.149 qpair failed and we were unable to recover it. 00:34:51.149 [2024-07-14 09:44:35.463280] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.149 [2024-07-14 09:44:35.463438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.149 [2024-07-14 09:44:35.463469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.149 [2024-07-14 09:44:35.463484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.149 [2024-07-14 09:44:35.463496] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.149 [2024-07-14 09:44:35.463525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.149 qpair failed and we were unable to recover it. 00:34:51.149 [2024-07-14 09:44:35.473315] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.149 [2024-07-14 09:44:35.473473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.149 [2024-07-14 09:44:35.473499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.149 [2024-07-14 09:44:35.473513] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.149 [2024-07-14 09:44:35.473525] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.149 [2024-07-14 09:44:35.473566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.149 qpair failed and we were unable to recover it. 
00:34:51.149 [2024-07-14 09:44:35.483383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.149 [2024-07-14 09:44:35.483553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.149 [2024-07-14 09:44:35.483578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.149 [2024-07-14 09:44:35.483593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.149 [2024-07-14 09:44:35.483620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.149 [2024-07-14 09:44:35.483649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.149 qpair failed and we were unable to recover it. 00:34:51.149 [2024-07-14 09:44:35.493390] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.149 [2024-07-14 09:44:35.493559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.149 [2024-07-14 09:44:35.493584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.149 [2024-07-14 09:44:35.493599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.149 [2024-07-14 09:44:35.493611] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.149 [2024-07-14 09:44:35.493640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.149 qpair failed and we were unable to recover it. 00:34:51.149 [2024-07-14 09:44:35.503426] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.149 [2024-07-14 09:44:35.503578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.149 [2024-07-14 09:44:35.503604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.149 [2024-07-14 09:44:35.503619] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.149 [2024-07-14 09:44:35.503631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.149 [2024-07-14 09:44:35.503681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.149 qpair failed and we were unable to recover it. 
00:34:51.149 [2024-07-14 09:44:35.513403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.149 [2024-07-14 09:44:35.513555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.149 [2024-07-14 09:44:35.513580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.149 [2024-07-14 09:44:35.513594] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.149 [2024-07-14 09:44:35.513607] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.149 [2024-07-14 09:44:35.513636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.149 qpair failed and we were unable to recover it. 00:34:51.149 [2024-07-14 09:44:35.523524] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.149 [2024-07-14 09:44:35.523708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.149 [2024-07-14 09:44:35.523748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.149 [2024-07-14 09:44:35.523763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.149 [2024-07-14 09:44:35.523774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.149 [2024-07-14 09:44:35.523818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.149 qpair failed and we were unable to recover it. 00:34:51.149 [2024-07-14 09:44:35.533481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.149 [2024-07-14 09:44:35.533691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.149 [2024-07-14 09:44:35.533717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.149 [2024-07-14 09:44:35.533731] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.149 [2024-07-14 09:44:35.533743] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.149 [2024-07-14 09:44:35.533772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.149 qpair failed and we were unable to recover it. 
00:34:51.149 [2024-07-14 09:44:35.543525] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.149 [2024-07-14 09:44:35.543695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.149 [2024-07-14 09:44:35.543720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.149 [2024-07-14 09:44:35.543734] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.149 [2024-07-14 09:44:35.543747] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.149 [2024-07-14 09:44:35.543776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.149 qpair failed and we were unable to recover it. 00:34:51.149 [2024-07-14 09:44:35.553525] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.149 [2024-07-14 09:44:35.553680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.149 [2024-07-14 09:44:35.553711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.149 [2024-07-14 09:44:35.553726] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.149 [2024-07-14 09:44:35.553738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.149 [2024-07-14 09:44:35.553768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.149 qpair failed and we were unable to recover it. 00:34:51.149 [2024-07-14 09:44:35.563619] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.149 [2024-07-14 09:44:35.563850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.150 [2024-07-14 09:44:35.563881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.150 [2024-07-14 09:44:35.563911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.150 [2024-07-14 09:44:35.563924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.150 [2024-07-14 09:44:35.563956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.150 qpair failed and we were unable to recover it. 
00:34:51.150 [2024-07-14 09:44:35.573594] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.150 [2024-07-14 09:44:35.573797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.150 [2024-07-14 09:44:35.573839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.150 [2024-07-14 09:44:35.573857] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.150 [2024-07-14 09:44:35.573897] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.150 [2024-07-14 09:44:35.573930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.150 qpair failed and we were unable to recover it. 00:34:51.150 [2024-07-14 09:44:35.583637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.150 [2024-07-14 09:44:35.583792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.150 [2024-07-14 09:44:35.583819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.150 [2024-07-14 09:44:35.583833] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.150 [2024-07-14 09:44:35.583845] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.150 [2024-07-14 09:44:35.583885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.150 qpair failed and we were unable to recover it. 00:34:51.150 [2024-07-14 09:44:35.593667] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.150 [2024-07-14 09:44:35.593831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.150 [2024-07-14 09:44:35.593857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.150 [2024-07-14 09:44:35.593879] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.150 [2024-07-14 09:44:35.593898] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.150 [2024-07-14 09:44:35.593929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.150 qpair failed and we were unable to recover it. 
00:34:51.409 [2024-07-14 09:44:35.603716] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.409 [2024-07-14 09:44:35.603928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.409 [2024-07-14 09:44:35.603954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.409 [2024-07-14 09:44:35.603969] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.409 [2024-07-14 09:44:35.603981] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.409 [2024-07-14 09:44:35.604010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.409 qpair failed and we were unable to recover it. 00:34:51.409 [2024-07-14 09:44:35.613700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.409 [2024-07-14 09:44:35.613921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.409 [2024-07-14 09:44:35.613947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.409 [2024-07-14 09:44:35.613961] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.409 [2024-07-14 09:44:35.613973] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.409 [2024-07-14 09:44:35.614002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.409 qpair failed and we were unable to recover it. 00:34:51.409 [2024-07-14 09:44:35.623734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.409 [2024-07-14 09:44:35.623900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.409 [2024-07-14 09:44:35.623926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.409 [2024-07-14 09:44:35.623940] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.409 [2024-07-14 09:44:35.623953] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.409 [2024-07-14 09:44:35.623982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.409 qpair failed and we were unable to recover it. 
00:34:51.409 [2024-07-14 09:44:35.633782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.409 [2024-07-14 09:44:35.633951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.409 [2024-07-14 09:44:35.633977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.409 [2024-07-14 09:44:35.633991] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.409 [2024-07-14 09:44:35.634003] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.409 [2024-07-14 09:44:35.634032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.409 qpair failed and we were unable to recover it. 00:34:51.409 [2024-07-14 09:44:35.643786] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.409 [2024-07-14 09:44:35.643992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.409 [2024-07-14 09:44:35.644018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.409 [2024-07-14 09:44:35.644033] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.409 [2024-07-14 09:44:35.644045] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.409 [2024-07-14 09:44:35.644074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.409 qpair failed and we were unable to recover it. 00:34:51.409 [2024-07-14 09:44:35.653811] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.409 [2024-07-14 09:44:35.653984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.409 [2024-07-14 09:44:35.654009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.409 [2024-07-14 09:44:35.654024] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.409 [2024-07-14 09:44:35.654035] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.409 [2024-07-14 09:44:35.654064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.409 qpair failed and we were unable to recover it. 
00:34:51.409 [2024-07-14 09:44:35.663823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.409 [2024-07-14 09:44:35.663986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.409 [2024-07-14 09:44:35.664012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.409 [2024-07-14 09:44:35.664026] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.409 [2024-07-14 09:44:35.664039] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.409 [2024-07-14 09:44:35.664068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.409 qpair failed and we were unable to recover it. 00:34:51.409 [2024-07-14 09:44:35.673896] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.409 [2024-07-14 09:44:35.674077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.409 [2024-07-14 09:44:35.674101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.409 [2024-07-14 09:44:35.674115] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.409 [2024-07-14 09:44:35.674127] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.410 [2024-07-14 09:44:35.674156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.410 qpair failed and we were unable to recover it. 00:34:51.410 [2024-07-14 09:44:35.683914] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.410 [2024-07-14 09:44:35.684081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.410 [2024-07-14 09:44:35.684107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.410 [2024-07-14 09:44:35.684126] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.410 [2024-07-14 09:44:35.684140] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.410 [2024-07-14 09:44:35.684170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.410 qpair failed and we were unable to recover it. 
00:34:51.410 [2024-07-14 09:44:35.693933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.410 [2024-07-14 09:44:35.694100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.410 [2024-07-14 09:44:35.694125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.410 [2024-07-14 09:44:35.694139] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.410 [2024-07-14 09:44:35.694151] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.410 [2024-07-14 09:44:35.694180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.410 qpair failed and we were unable to recover it. 00:34:51.410 [2024-07-14 09:44:35.703985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.410 [2024-07-14 09:44:35.704141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.410 [2024-07-14 09:44:35.704167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.410 [2024-07-14 09:44:35.704181] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.410 [2024-07-14 09:44:35.704193] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.410 [2024-07-14 09:44:35.704236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.410 qpair failed and we were unable to recover it. 00:34:51.410 [2024-07-14 09:44:35.713969] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.410 [2024-07-14 09:44:35.714125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.410 [2024-07-14 09:44:35.714150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.410 [2024-07-14 09:44:35.714165] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.410 [2024-07-14 09:44:35.714177] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.410 [2024-07-14 09:44:35.714206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.410 qpair failed and we were unable to recover it. 
00:34:51.410 [2024-07-14 09:44:35.724000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.410 [2024-07-14 09:44:35.724160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.410 [2024-07-14 09:44:35.724184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.410 [2024-07-14 09:44:35.724199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.410 [2024-07-14 09:44:35.724211] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.410 [2024-07-14 09:44:35.724239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.410 qpair failed and we were unable to recover it. 00:34:51.410 [2024-07-14 09:44:35.734077] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.410 [2024-07-14 09:44:35.734245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.410 [2024-07-14 09:44:35.734271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.410 [2024-07-14 09:44:35.734288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.410 [2024-07-14 09:44:35.734316] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.410 [2024-07-14 09:44:35.734345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.410 qpair failed and we were unable to recover it. 00:34:51.410 [2024-07-14 09:44:35.744098] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.410 [2024-07-14 09:44:35.744266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.410 [2024-07-14 09:44:35.744292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.410 [2024-07-14 09:44:35.744306] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.410 [2024-07-14 09:44:35.744318] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.410 [2024-07-14 09:44:35.744347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.410 qpair failed and we were unable to recover it. 
00:34:51.410 [2024-07-14 09:44:35.754091] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.410 [2024-07-14 09:44:35.754241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.410 [2024-07-14 09:44:35.754266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.410 [2024-07-14 09:44:35.754281] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.410 [2024-07-14 09:44:35.754293] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.410 [2024-07-14 09:44:35.754321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.410 qpair failed and we were unable to recover it. 00:34:51.410 [2024-07-14 09:44:35.764238] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.410 [2024-07-14 09:44:35.764401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.410 [2024-07-14 09:44:35.764426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.410 [2024-07-14 09:44:35.764441] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.410 [2024-07-14 09:44:35.764453] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.410 [2024-07-14 09:44:35.764482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.410 qpair failed and we were unable to recover it. 00:34:51.410 [2024-07-14 09:44:35.774155] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.410 [2024-07-14 09:44:35.774324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.410 [2024-07-14 09:44:35.774349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.410 [2024-07-14 09:44:35.774369] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.410 [2024-07-14 09:44:35.774382] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.410 [2024-07-14 09:44:35.774415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.410 qpair failed and we were unable to recover it. 
00:34:51.410 [2024-07-14 09:44:35.784186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.410 [2024-07-14 09:44:35.784344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.410 [2024-07-14 09:44:35.784370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.410 [2024-07-14 09:44:35.784384] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.410 [2024-07-14 09:44:35.784396] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.410 [2024-07-14 09:44:35.784424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.410 qpair failed and we were unable to recover it. 00:34:51.410 [2024-07-14 09:44:35.794234] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.410 [2024-07-14 09:44:35.794394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.410 [2024-07-14 09:44:35.794419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.410 [2024-07-14 09:44:35.794433] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.410 [2024-07-14 09:44:35.794446] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.410 [2024-07-14 09:44:35.794476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.410 qpair failed and we were unable to recover it. 00:34:51.410 [2024-07-14 09:44:35.804258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.410 [2024-07-14 09:44:35.804421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.410 [2024-07-14 09:44:35.804446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.410 [2024-07-14 09:44:35.804461] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.410 [2024-07-14 09:44:35.804473] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.410 [2024-07-14 09:44:35.804502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.410 qpair failed and we were unable to recover it. 
00:34:51.410 [2024-07-14 09:44:35.814316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.410 [2024-07-14 09:44:35.814498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.410 [2024-07-14 09:44:35.814524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.410 [2024-07-14 09:44:35.814552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.410 [2024-07-14 09:44:35.814565] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.410 [2024-07-14 09:44:35.814593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.410 qpair failed and we were unable to recover it. 00:34:51.410 [2024-07-14 09:44:35.824327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.411 [2024-07-14 09:44:35.824508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.411 [2024-07-14 09:44:35.824533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.411 [2024-07-14 09:44:35.824563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.411 [2024-07-14 09:44:35.824575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.411 [2024-07-14 09:44:35.824603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.411 qpair failed and we were unable to recover it. 00:34:51.411 [2024-07-14 09:44:35.834318] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.411 [2024-07-14 09:44:35.834476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.411 [2024-07-14 09:44:35.834502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.411 [2024-07-14 09:44:35.834516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.411 [2024-07-14 09:44:35.834528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.411 [2024-07-14 09:44:35.834557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.411 qpair failed and we were unable to recover it. 
00:34:51.411 [2024-07-14 09:44:35.844360] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.411 [2024-07-14 09:44:35.844521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.411 [2024-07-14 09:44:35.844546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.411 [2024-07-14 09:44:35.844560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.411 [2024-07-14 09:44:35.844572] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.411 [2024-07-14 09:44:35.844601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.411 qpair failed and we were unable to recover it. 00:34:51.411 [2024-07-14 09:44:35.854453] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.411 [2024-07-14 09:44:35.854628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.411 [2024-07-14 09:44:35.854653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.411 [2024-07-14 09:44:35.854682] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.411 [2024-07-14 09:44:35.854694] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.411 [2024-07-14 09:44:35.854722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.411 qpair failed and we were unable to recover it. 00:34:51.671 [2024-07-14 09:44:35.864380] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.671 [2024-07-14 09:44:35.864539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.671 [2024-07-14 09:44:35.864569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.671 [2024-07-14 09:44:35.864585] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.671 [2024-07-14 09:44:35.864597] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.671 [2024-07-14 09:44:35.864626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.671 qpair failed and we were unable to recover it. 
00:34:51.672 [2024-07-14 09:44:35.874442] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.672 [2024-07-14 09:44:35.874602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.672 [2024-07-14 09:44:35.874628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.672 [2024-07-14 09:44:35.874642] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.672 [2024-07-14 09:44:35.874654] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.672 [2024-07-14 09:44:35.874698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.672 qpair failed and we were unable to recover it. 00:34:51.672 [2024-07-14 09:44:35.884504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.672 [2024-07-14 09:44:35.884668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.672 [2024-07-14 09:44:35.884693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.672 [2024-07-14 09:44:35.884707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.672 [2024-07-14 09:44:35.884735] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.672 [2024-07-14 09:44:35.884764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.672 qpair failed and we were unable to recover it. 00:34:51.672 [2024-07-14 09:44:35.894485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.672 [2024-07-14 09:44:35.894689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.672 [2024-07-14 09:44:35.894714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.672 [2024-07-14 09:44:35.894728] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.672 [2024-07-14 09:44:35.894740] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.672 [2024-07-14 09:44:35.894769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.672 qpair failed and we were unable to recover it. 
00:34:51.672 [2024-07-14 09:44:35.904530] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.672 [2024-07-14 09:44:35.904697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.672 [2024-07-14 09:44:35.904723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.672 [2024-07-14 09:44:35.904737] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.672 [2024-07-14 09:44:35.904749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.672 [2024-07-14 09:44:35.904785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.672 qpair failed and we were unable to recover it. 00:34:51.672 [2024-07-14 09:44:35.914575] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.672 [2024-07-14 09:44:35.914734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.672 [2024-07-14 09:44:35.914760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.672 [2024-07-14 09:44:35.914775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.672 [2024-07-14 09:44:35.914787] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.672 [2024-07-14 09:44:35.914830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.672 qpair failed and we were unable to recover it. 00:34:51.672 [2024-07-14 09:44:35.924635] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.672 [2024-07-14 09:44:35.924797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.672 [2024-07-14 09:44:35.924822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.672 [2024-07-14 09:44:35.924837] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.672 [2024-07-14 09:44:35.924849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.672 [2024-07-14 09:44:35.924899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.672 qpair failed and we were unable to recover it. 
00:34:51.672 [2024-07-14 09:44:35.934608] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.672 [2024-07-14 09:44:35.934776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.672 [2024-07-14 09:44:35.934801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.672 [2024-07-14 09:44:35.934815] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.672 [2024-07-14 09:44:35.934827] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.672 [2024-07-14 09:44:35.934856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.672 qpair failed and we were unable to recover it. 00:34:51.672 [2024-07-14 09:44:35.944706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.672 [2024-07-14 09:44:35.944889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.672 [2024-07-14 09:44:35.944915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.672 [2024-07-14 09:44:35.944929] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.672 [2024-07-14 09:44:35.944941] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.672 [2024-07-14 09:44:35.944971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.672 qpair failed and we were unable to recover it. 00:34:51.672 [2024-07-14 09:44:35.954678] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.672 [2024-07-14 09:44:35.954842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.672 [2024-07-14 09:44:35.954884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.672 [2024-07-14 09:44:35.954900] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.672 [2024-07-14 09:44:35.954911] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.672 [2024-07-14 09:44:35.954941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.672 qpair failed and we were unable to recover it. 
00:34:51.672 [2024-07-14 09:44:35.964712] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.672 [2024-07-14 09:44:35.964880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.672 [2024-07-14 09:44:35.964906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.672 [2024-07-14 09:44:35.964920] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.672 [2024-07-14 09:44:35.964932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.672 [2024-07-14 09:44:35.964961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.672 qpair failed and we were unable to recover it. 00:34:51.672 [2024-07-14 09:44:35.974784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.672 [2024-07-14 09:44:35.975002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.672 [2024-07-14 09:44:35.975028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.672 [2024-07-14 09:44:35.975042] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.672 [2024-07-14 09:44:35.975054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.672 [2024-07-14 09:44:35.975084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.672 qpair failed and we were unable to recover it. 00:34:51.672 [2024-07-14 09:44:35.984760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.672 [2024-07-14 09:44:35.984928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.672 [2024-07-14 09:44:35.984954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.672 [2024-07-14 09:44:35.984968] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.672 [2024-07-14 09:44:35.984981] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.672 [2024-07-14 09:44:35.985010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.672 qpair failed and we were unable to recover it. 
00:34:51.672 [2024-07-14 09:44:35.994793] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.672 [2024-07-14 09:44:35.994997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.672 [2024-07-14 09:44:35.995023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.672 [2024-07-14 09:44:35.995038] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.672 [2024-07-14 09:44:35.995058] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.672 [2024-07-14 09:44:35.995089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.672 qpair failed and we were unable to recover it. 00:34:51.672 [2024-07-14 09:44:36.004819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.672 [2024-07-14 09:44:36.004990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.672 [2024-07-14 09:44:36.005016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.672 [2024-07-14 09:44:36.005030] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.672 [2024-07-14 09:44:36.005042] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.672 [2024-07-14 09:44:36.005072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.672 qpair failed and we were unable to recover it. 00:34:51.672 [2024-07-14 09:44:36.014847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.672 [2024-07-14 09:44:36.015059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.673 [2024-07-14 09:44:36.015085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.673 [2024-07-14 09:44:36.015099] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.673 [2024-07-14 09:44:36.015111] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.673 [2024-07-14 09:44:36.015140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.673 qpair failed and we were unable to recover it. 
00:34:51.673 [2024-07-14 09:44:36.024920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.673 [2024-07-14 09:44:36.025128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.673 [2024-07-14 09:44:36.025168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.673 [2024-07-14 09:44:36.025182] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.673 [2024-07-14 09:44:36.025194] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.673 [2024-07-14 09:44:36.025237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.673 qpair failed and we were unable to recover it. 00:34:51.673 [2024-07-14 09:44:36.034911] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.673 [2024-07-14 09:44:36.035069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.673 [2024-07-14 09:44:36.035094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.673 [2024-07-14 09:44:36.035109] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.673 [2024-07-14 09:44:36.035121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.673 [2024-07-14 09:44:36.035162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.673 qpair failed and we were unable to recover it. 00:34:51.673 [2024-07-14 09:44:36.044952] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.673 [2024-07-14 09:44:36.045124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.673 [2024-07-14 09:44:36.045150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.673 [2024-07-14 09:44:36.045164] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.673 [2024-07-14 09:44:36.045176] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.673 [2024-07-14 09:44:36.045205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.673 qpair failed and we were unable to recover it. 
00:34:51.673 [2024-07-14 09:44:36.054968] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.673 [2024-07-14 09:44:36.055135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.673 [2024-07-14 09:44:36.055161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.673 [2024-07-14 09:44:36.055175] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.673 [2024-07-14 09:44:36.055187] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.673 [2024-07-14 09:44:36.055216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.673 qpair failed and we were unable to recover it. 00:34:51.673 [2024-07-14 09:44:36.065012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.673 [2024-07-14 09:44:36.065185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.673 [2024-07-14 09:44:36.065211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.673 [2024-07-14 09:44:36.065226] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.673 [2024-07-14 09:44:36.065238] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.673 [2024-07-14 09:44:36.065282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.673 qpair failed and we were unable to recover it. 00:34:51.673 [2024-07-14 09:44:36.075016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.673 [2024-07-14 09:44:36.075175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.673 [2024-07-14 09:44:36.075201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.673 [2024-07-14 09:44:36.075216] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.673 [2024-07-14 09:44:36.075228] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.673 [2024-07-14 09:44:36.075257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.673 qpair failed and we were unable to recover it. 
00:34:51.673 [2024-07-14 09:44:36.085076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.673 [2024-07-14 09:44:36.085238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.673 [2024-07-14 09:44:36.085263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.673 [2024-07-14 09:44:36.085278] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.673 [2024-07-14 09:44:36.085296] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.673 [2024-07-14 09:44:36.085327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.673 qpair failed and we were unable to recover it. 00:34:51.673 [2024-07-14 09:44:36.095117] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.673 [2024-07-14 09:44:36.095286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.673 [2024-07-14 09:44:36.095312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.673 [2024-07-14 09:44:36.095327] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.673 [2024-07-14 09:44:36.095339] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.673 [2024-07-14 09:44:36.095368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.673 qpair failed and we were unable to recover it. 00:34:51.673 [2024-07-14 09:44:36.105142] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.673 [2024-07-14 09:44:36.105311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.673 [2024-07-14 09:44:36.105338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.673 [2024-07-14 09:44:36.105356] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.673 [2024-07-14 09:44:36.105370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.673 [2024-07-14 09:44:36.105415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.673 qpair failed and we were unable to recover it. 
00:34:51.673 [2024-07-14 09:44:36.115127] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.673 [2024-07-14 09:44:36.115284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.673 [2024-07-14 09:44:36.115310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.673 [2024-07-14 09:44:36.115324] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.673 [2024-07-14 09:44:36.115336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.673 [2024-07-14 09:44:36.115366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.673 qpair failed and we were unable to recover it. 00:34:51.933 [2024-07-14 09:44:36.125172] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.933 [2024-07-14 09:44:36.125338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.933 [2024-07-14 09:44:36.125364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.933 [2024-07-14 09:44:36.125378] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.933 [2024-07-14 09:44:36.125405] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.933 [2024-07-14 09:44:36.125434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.933 qpair failed and we were unable to recover it. 00:34:51.933 [2024-07-14 09:44:36.135267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.933 [2024-07-14 09:44:36.135470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.933 [2024-07-14 09:44:36.135497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.933 [2024-07-14 09:44:36.135530] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.933 [2024-07-14 09:44:36.135543] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.933 [2024-07-14 09:44:36.135586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.933 qpair failed and we were unable to recover it. 
00:34:51.933 [2024-07-14 09:44:36.145228] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.933 [2024-07-14 09:44:36.145380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.933 [2024-07-14 09:44:36.145405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.933 [2024-07-14 09:44:36.145420] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.933 [2024-07-14 09:44:36.145432] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.933 [2024-07-14 09:44:36.145464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.933 qpair failed and we were unable to recover it. 00:34:51.933 [2024-07-14 09:44:36.155290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.933 [2024-07-14 09:44:36.155445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.933 [2024-07-14 09:44:36.155470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.933 [2024-07-14 09:44:36.155485] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.933 [2024-07-14 09:44:36.155497] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.933 [2024-07-14 09:44:36.155526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.933 qpair failed and we were unable to recover it. 00:34:51.933 [2024-07-14 09:44:36.165295] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.933 [2024-07-14 09:44:36.165456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.933 [2024-07-14 09:44:36.165481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.933 [2024-07-14 09:44:36.165495] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.933 [2024-07-14 09:44:36.165508] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.933 [2024-07-14 09:44:36.165537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.933 qpair failed and we were unable to recover it. 
00:34:51.933 [2024-07-14 09:44:36.175354] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.933 [2024-07-14 09:44:36.175556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.933 [2024-07-14 09:44:36.175596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.933 [2024-07-14 09:44:36.175615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.933 [2024-07-14 09:44:36.175627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.933 [2024-07-14 09:44:36.175656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.933 qpair failed and we were unable to recover it. 00:34:51.933 [2024-07-14 09:44:36.185378] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.933 [2024-07-14 09:44:36.185540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.933 [2024-07-14 09:44:36.185566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.933 [2024-07-14 09:44:36.185580] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.933 [2024-07-14 09:44:36.185593] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.933 [2024-07-14 09:44:36.185622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.933 qpair failed and we were unable to recover it. 00:34:51.933 [2024-07-14 09:44:36.195363] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.933 [2024-07-14 09:44:36.195538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.933 [2024-07-14 09:44:36.195564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.933 [2024-07-14 09:44:36.195578] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.933 [2024-07-14 09:44:36.195590] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.933 [2024-07-14 09:44:36.195619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.933 qpair failed and we were unable to recover it. 
00:34:51.933 [2024-07-14 09:44:36.205416] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.933 [2024-07-14 09:44:36.205580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.933 [2024-07-14 09:44:36.205605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.933 [2024-07-14 09:44:36.205620] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.933 [2024-07-14 09:44:36.205632] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.933 [2024-07-14 09:44:36.205661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.933 qpair failed and we were unable to recover it. 00:34:51.933 [2024-07-14 09:44:36.215430] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.933 [2024-07-14 09:44:36.215646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.933 [2024-07-14 09:44:36.215671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.933 [2024-07-14 09:44:36.215686] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.933 [2024-07-14 09:44:36.215698] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.933 [2024-07-14 09:44:36.215727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.933 qpair failed and we were unable to recover it. 00:34:51.933 [2024-07-14 09:44:36.225452] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.933 [2024-07-14 09:44:36.225613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.933 [2024-07-14 09:44:36.225638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.933 [2024-07-14 09:44:36.225653] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.933 [2024-07-14 09:44:36.225665] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.933 [2024-07-14 09:44:36.225697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.933 qpair failed and we were unable to recover it. 
00:34:51.933 [2024-07-14 09:44:36.235492] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.933 [2024-07-14 09:44:36.235699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.933 [2024-07-14 09:44:36.235724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.933 [2024-07-14 09:44:36.235738] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.933 [2024-07-14 09:44:36.235750] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.933 [2024-07-14 09:44:36.235780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.934 qpair failed and we were unable to recover it. 00:34:51.934 [2024-07-14 09:44:36.245620] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.934 [2024-07-14 09:44:36.245808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.934 [2024-07-14 09:44:36.245833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.934 [2024-07-14 09:44:36.245862] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.934 [2024-07-14 09:44:36.245882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.934 [2024-07-14 09:44:36.245927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.934 qpair failed and we were unable to recover it. 00:34:51.934 [2024-07-14 09:44:36.255599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.934 [2024-07-14 09:44:36.255808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.934 [2024-07-14 09:44:36.255833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.934 [2024-07-14 09:44:36.255847] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.934 [2024-07-14 09:44:36.255859] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.934 [2024-07-14 09:44:36.255896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.934 qpair failed and we were unable to recover it. 
00:34:51.934 [2024-07-14 09:44:36.265654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.934 [2024-07-14 09:44:36.265855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.934 [2024-07-14 09:44:36.265892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.934 [2024-07-14 09:44:36.265908] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.934 [2024-07-14 09:44:36.265920] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.934 [2024-07-14 09:44:36.265950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.934 qpair failed and we were unable to recover it. 00:34:51.934 [2024-07-14 09:44:36.275630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.934 [2024-07-14 09:44:36.275797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.934 [2024-07-14 09:44:36.275823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.934 [2024-07-14 09:44:36.275837] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.934 [2024-07-14 09:44:36.275849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.934 [2024-07-14 09:44:36.275885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.934 qpair failed and we were unable to recover it. 00:34:51.934 [2024-07-14 09:44:36.285667] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.934 [2024-07-14 09:44:36.285831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.934 [2024-07-14 09:44:36.285856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.934 [2024-07-14 09:44:36.285879] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.934 [2024-07-14 09:44:36.285896] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.934 [2024-07-14 09:44:36.285926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.934 qpair failed and we were unable to recover it. 
00:34:51.934 [2024-07-14 09:44:36.295695] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.934 [2024-07-14 09:44:36.295873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.934 [2024-07-14 09:44:36.295899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.934 [2024-07-14 09:44:36.295913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.934 [2024-07-14 09:44:36.295925] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.934 [2024-07-14 09:44:36.295955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.934 qpair failed and we were unable to recover it. 00:34:51.934 [2024-07-14 09:44:36.305698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.934 [2024-07-14 09:44:36.305855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.934 [2024-07-14 09:44:36.305887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.934 [2024-07-14 09:44:36.305902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.934 [2024-07-14 09:44:36.305914] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.934 [2024-07-14 09:44:36.305950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.934 qpair failed and we were unable to recover it. 00:34:51.934 [2024-07-14 09:44:36.315745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.934 [2024-07-14 09:44:36.315940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.934 [2024-07-14 09:44:36.315967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.934 [2024-07-14 09:44:36.315985] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.934 [2024-07-14 09:44:36.316001] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.934 [2024-07-14 09:44:36.316032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.934 qpair failed and we were unable to recover it. 
00:34:51.934 [2024-07-14 09:44:36.325780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.934 [2024-07-14 09:44:36.325952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.934 [2024-07-14 09:44:36.325979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.934 [2024-07-14 09:44:36.325993] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.934 [2024-07-14 09:44:36.326005] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.934 [2024-07-14 09:44:36.326035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.934 qpair failed and we were unable to recover it. 00:34:51.934 [2024-07-14 09:44:36.335825] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.934 [2024-07-14 09:44:36.336006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.934 [2024-07-14 09:44:36.336033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.934 [2024-07-14 09:44:36.336052] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.934 [2024-07-14 09:44:36.336066] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.934 [2024-07-14 09:44:36.336096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.934 qpair failed and we were unable to recover it. 00:34:51.934 [2024-07-14 09:44:36.345808] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.934 [2024-07-14 09:44:36.345970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.934 [2024-07-14 09:44:36.345996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.934 [2024-07-14 09:44:36.346010] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.934 [2024-07-14 09:44:36.346022] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.934 [2024-07-14 09:44:36.346052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.934 qpair failed and we were unable to recover it. 
00:34:51.934 [2024-07-14 09:44:36.355876] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.934 [2024-07-14 09:44:36.356038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.934 [2024-07-14 09:44:36.356068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.934 [2024-07-14 09:44:36.356084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.934 [2024-07-14 09:44:36.356096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.934 [2024-07-14 09:44:36.356128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.934 qpair failed and we were unable to recover it. 00:34:51.934 [2024-07-14 09:44:36.365893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.934 [2024-07-14 09:44:36.366058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.934 [2024-07-14 09:44:36.366084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.934 [2024-07-14 09:44:36.366099] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.934 [2024-07-14 09:44:36.366111] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.934 [2024-07-14 09:44:36.366140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.934 qpair failed and we were unable to recover it. 00:34:51.934 [2024-07-14 09:44:36.375920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:51.934 [2024-07-14 09:44:36.376098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:51.934 [2024-07-14 09:44:36.376123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:51.934 [2024-07-14 09:44:36.376137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:51.934 [2024-07-14 09:44:36.376149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:51.934 [2024-07-14 09:44:36.376178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:51.934 qpair failed and we were unable to recover it. 
00:34:52.194 [2024-07-14 09:44:36.385967] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.194 [2024-07-14 09:44:36.386127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.194 [2024-07-14 09:44:36.386152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.194 [2024-07-14 09:44:36.386167] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.194 [2024-07-14 09:44:36.386193] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.194 [2024-07-14 09:44:36.386223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.194 qpair failed and we were unable to recover it. 00:34:52.194 [2024-07-14 09:44:36.396010] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.194 [2024-07-14 09:44:36.396180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.194 [2024-07-14 09:44:36.396205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.194 [2024-07-14 09:44:36.396220] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.194 [2024-07-14 09:44:36.396237] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.194 [2024-07-14 09:44:36.396268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.194 qpair failed and we were unable to recover it. 00:34:52.194 [2024-07-14 09:44:36.406033] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.194 [2024-07-14 09:44:36.406194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.194 [2024-07-14 09:44:36.406220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.194 [2024-07-14 09:44:36.406235] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.194 [2024-07-14 09:44:36.406262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.194 [2024-07-14 09:44:36.406319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.194 qpair failed and we were unable to recover it. 
00:34:52.194 [2024-07-14 09:44:36.416044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.194 [2024-07-14 09:44:36.416206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.194 [2024-07-14 09:44:36.416232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.194 [2024-07-14 09:44:36.416246] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.194 [2024-07-14 09:44:36.416258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.194 [2024-07-14 09:44:36.416287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.194 qpair failed and we were unable to recover it. 00:34:52.194 [2024-07-14 09:44:36.426123] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.194 [2024-07-14 09:44:36.426284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.194 [2024-07-14 09:44:36.426309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.194 [2024-07-14 09:44:36.426323] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.194 [2024-07-14 09:44:36.426335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.194 [2024-07-14 09:44:36.426365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.194 qpair failed and we were unable to recover it. 00:34:52.194 [2024-07-14 09:44:36.436099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.195 [2024-07-14 09:44:36.436266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.195 [2024-07-14 09:44:36.436293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.195 [2024-07-14 09:44:36.436309] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.195 [2024-07-14 09:44:36.436321] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.195 [2024-07-14 09:44:36.436352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.195 qpair failed and we were unable to recover it. 
00:34:52.195 [2024-07-14 09:44:36.446133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.195 [2024-07-14 09:44:36.446301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.195 [2024-07-14 09:44:36.446326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.195 [2024-07-14 09:44:36.446341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.195 [2024-07-14 09:44:36.446353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.195 [2024-07-14 09:44:36.446382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.195 qpair failed and we were unable to recover it. 00:34:52.195 [2024-07-14 09:44:36.456178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.195 [2024-07-14 09:44:36.456355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.195 [2024-07-14 09:44:36.456392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.195 [2024-07-14 09:44:36.456406] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.195 [2024-07-14 09:44:36.456418] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.195 [2024-07-14 09:44:36.456447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.195 qpair failed and we were unable to recover it. 00:34:52.195 [2024-07-14 09:44:36.466229] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.195 [2024-07-14 09:44:36.466394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.195 [2024-07-14 09:44:36.466421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.195 [2024-07-14 09:44:36.466440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.195 [2024-07-14 09:44:36.466453] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.195 [2024-07-14 09:44:36.466497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.195 qpair failed and we were unable to recover it. 
00:34:52.195 [2024-07-14 09:44:36.476210] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.195 [2024-07-14 09:44:36.476372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.195 [2024-07-14 09:44:36.476399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.195 [2024-07-14 09:44:36.476414] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.195 [2024-07-14 09:44:36.476446] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.195 [2024-07-14 09:44:36.476476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.195 qpair failed and we were unable to recover it. 00:34:52.195 [2024-07-14 09:44:36.486262] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.195 [2024-07-14 09:44:36.486450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.195 [2024-07-14 09:44:36.486475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.195 [2024-07-14 09:44:36.486490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.195 [2024-07-14 09:44:36.486508] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.195 [2024-07-14 09:44:36.486539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.195 qpair failed and we were unable to recover it. 00:34:52.195 [2024-07-14 09:44:36.496301] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.195 [2024-07-14 09:44:36.496470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.195 [2024-07-14 09:44:36.496496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.195 [2024-07-14 09:44:36.496511] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.195 [2024-07-14 09:44:36.496523] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.195 [2024-07-14 09:44:36.496552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.195 qpair failed and we were unable to recover it. 
00:34:52.195 [2024-07-14 09:44:36.506323] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.195 [2024-07-14 09:44:36.506488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.195 [2024-07-14 09:44:36.506514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.195 [2024-07-14 09:44:36.506528] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.195 [2024-07-14 09:44:36.506541] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.195 [2024-07-14 09:44:36.506586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.195 qpair failed and we were unable to recover it. 00:34:52.195 [2024-07-14 09:44:36.516324] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.195 [2024-07-14 09:44:36.516483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.195 [2024-07-14 09:44:36.516508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.195 [2024-07-14 09:44:36.516522] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.195 [2024-07-14 09:44:36.516535] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.195 [2024-07-14 09:44:36.516564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.195 qpair failed and we were unable to recover it. 00:34:52.195 [2024-07-14 09:44:36.526373] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.195 [2024-07-14 09:44:36.526534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.195 [2024-07-14 09:44:36.526560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.195 [2024-07-14 09:44:36.526574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.195 [2024-07-14 09:44:36.526587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.195 [2024-07-14 09:44:36.526619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.195 qpair failed and we were unable to recover it. 
00:34:52.195 [2024-07-14 09:44:36.536366] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.195 [2024-07-14 09:44:36.536539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.195 [2024-07-14 09:44:36.536564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.195 [2024-07-14 09:44:36.536580] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.195 [2024-07-14 09:44:36.536592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.195 [2024-07-14 09:44:36.536621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.195 qpair failed and we were unable to recover it. 00:34:52.195 [2024-07-14 09:44:36.546397] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.195 [2024-07-14 09:44:36.546553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.195 [2024-07-14 09:44:36.546579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.195 [2024-07-14 09:44:36.546593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.195 [2024-07-14 09:44:36.546605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.195 [2024-07-14 09:44:36.546634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.195 qpair failed and we were unable to recover it. 00:34:52.195 [2024-07-14 09:44:36.556447] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.195 [2024-07-14 09:44:36.556662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.195 [2024-07-14 09:44:36.556687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.195 [2024-07-14 09:44:36.556701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.195 [2024-07-14 09:44:36.556714] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.195 [2024-07-14 09:44:36.556743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.195 qpair failed and we were unable to recover it. 
00:34:52.195 [2024-07-14 09:44:36.566498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.195 [2024-07-14 09:44:36.566672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.195 [2024-07-14 09:44:36.566698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.195 [2024-07-14 09:44:36.566728] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.195 [2024-07-14 09:44:36.566740] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.195 [2024-07-14 09:44:36.566786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.195 qpair failed and we were unable to recover it. 00:34:52.195 [2024-07-14 09:44:36.576533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.195 [2024-07-14 09:44:36.576747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.195 [2024-07-14 09:44:36.576787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.195 [2024-07-14 09:44:36.576807] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.196 [2024-07-14 09:44:36.576819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.196 [2024-07-14 09:44:36.576862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.196 qpair failed and we were unable to recover it. 00:34:52.196 [2024-07-14 09:44:36.586620] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.196 [2024-07-14 09:44:36.586793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.196 [2024-07-14 09:44:36.586831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.196 [2024-07-14 09:44:36.586862] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.196 [2024-07-14 09:44:36.586884] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.196 [2024-07-14 09:44:36.586916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.196 qpair failed and we were unable to recover it. 
00:34:52.196 [2024-07-14 09:44:36.596555] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.196 [2024-07-14 09:44:36.596802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.196 [2024-07-14 09:44:36.596828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.196 [2024-07-14 09:44:36.596858] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.196 [2024-07-14 09:44:36.596886] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.196 [2024-07-14 09:44:36.596919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.196 qpair failed and we were unable to recover it. 00:34:52.196 [2024-07-14 09:44:36.606597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.196 [2024-07-14 09:44:36.606764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.196 [2024-07-14 09:44:36.606791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.196 [2024-07-14 09:44:36.606806] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.196 [2024-07-14 09:44:36.606818] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.196 [2024-07-14 09:44:36.606847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.196 qpair failed and we were unable to recover it. 00:34:52.196 [2024-07-14 09:44:36.616645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.196 [2024-07-14 09:44:36.616821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.196 [2024-07-14 09:44:36.616847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.196 [2024-07-14 09:44:36.616861] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.196 [2024-07-14 09:44:36.616882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.196 [2024-07-14 09:44:36.616912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.196 qpair failed and we were unable to recover it. 
00:34:52.196 [2024-07-14 09:44:36.626623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.196 [2024-07-14 09:44:36.626781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.196 [2024-07-14 09:44:36.626806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.196 [2024-07-14 09:44:36.626820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.196 [2024-07-14 09:44:36.626832] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.196 [2024-07-14 09:44:36.626862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.196 qpair failed and we were unable to recover it. 00:34:52.196 [2024-07-14 09:44:36.636679] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.196 [2024-07-14 09:44:36.636887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.196 [2024-07-14 09:44:36.636923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.196 [2024-07-14 09:44:36.636937] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.196 [2024-07-14 09:44:36.636949] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.196 [2024-07-14 09:44:36.636979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.196 qpair failed and we were unable to recover it. 00:34:52.455 [2024-07-14 09:44:36.646690] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.455 [2024-07-14 09:44:36.646871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.455 [2024-07-14 09:44:36.646897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.455 [2024-07-14 09:44:36.646912] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.455 [2024-07-14 09:44:36.646924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.455 [2024-07-14 09:44:36.646954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.455 qpair failed and we were unable to recover it. 
00:34:52.455 [2024-07-14 09:44:36.656731] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.455 [2024-07-14 09:44:36.656913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.455 [2024-07-14 09:44:36.656939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.455 [2024-07-14 09:44:36.656953] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.455 [2024-07-14 09:44:36.656965] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.455 [2024-07-14 09:44:36.656994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.456 qpair failed and we were unable to recover it. 00:34:52.456 [2024-07-14 09:44:36.666762] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.456 [2024-07-14 09:44:36.666943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.456 [2024-07-14 09:44:36.666974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.456 [2024-07-14 09:44:36.666989] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.456 [2024-07-14 09:44:36.667002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.456 [2024-07-14 09:44:36.667031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.456 qpair failed and we were unable to recover it. 00:34:52.456 [2024-07-14 09:44:36.676774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.456 [2024-07-14 09:44:36.676942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.456 [2024-07-14 09:44:36.676967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.456 [2024-07-14 09:44:36.676982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.456 [2024-07-14 09:44:36.676993] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.456 [2024-07-14 09:44:36.677023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.456 qpair failed and we were unable to recover it. 
00:34:52.456 [2024-07-14 09:44:36.686828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.456 [2024-07-14 09:44:36.687044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.456 [2024-07-14 09:44:36.687069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.456 [2024-07-14 09:44:36.687083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.456 [2024-07-14 09:44:36.687096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.456 [2024-07-14 09:44:36.687125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.456 qpair failed and we were unable to recover it. 00:34:52.456 [2024-07-14 09:44:36.696876] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.456 [2024-07-14 09:44:36.697091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.456 [2024-07-14 09:44:36.697116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.456 [2024-07-14 09:44:36.697131] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.456 [2024-07-14 09:44:36.697143] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.456 [2024-07-14 09:44:36.697172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.456 qpair failed and we were unable to recover it. 00:34:52.456 [2024-07-14 09:44:36.706862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.456 [2024-07-14 09:44:36.707024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.456 [2024-07-14 09:44:36.707051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.456 [2024-07-14 09:44:36.707065] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.456 [2024-07-14 09:44:36.707077] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.456 [2024-07-14 09:44:36.707115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.456 qpair failed and we were unable to recover it. 
00:34:52.456 [2024-07-14 09:44:36.716939] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.456 [2024-07-14 09:44:36.717100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.456 [2024-07-14 09:44:36.717126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.456 [2024-07-14 09:44:36.717140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.456 [2024-07-14 09:44:36.717152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.456 [2024-07-14 09:44:36.717181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.456 qpair failed and we were unable to recover it. 00:34:52.456 [2024-07-14 09:44:36.726973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.456 [2024-07-14 09:44:36.727182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.456 [2024-07-14 09:44:36.727207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.456 [2024-07-14 09:44:36.727222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.456 [2024-07-14 09:44:36.727234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.456 [2024-07-14 09:44:36.727263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.456 qpair failed and we were unable to recover it. 00:34:52.456 [2024-07-14 09:44:36.736985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.456 [2024-07-14 09:44:36.737173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.456 [2024-07-14 09:44:36.737199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.456 [2024-07-14 09:44:36.737213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.456 [2024-07-14 09:44:36.737225] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.456 [2024-07-14 09:44:36.737254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.456 qpair failed and we were unable to recover it. 
00:34:52.456 [2024-07-14 09:44:36.746998] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.456 [2024-07-14 09:44:36.747158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.456 [2024-07-14 09:44:36.747183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.456 [2024-07-14 09:44:36.747197] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.456 [2024-07-14 09:44:36.747210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.456 [2024-07-14 09:44:36.747239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.456 qpair failed and we were unable to recover it. 00:34:52.456 [2024-07-14 09:44:36.757020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.456 [2024-07-14 09:44:36.757185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.456 [2024-07-14 09:44:36.757216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.456 [2024-07-14 09:44:36.757231] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.456 [2024-07-14 09:44:36.757243] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.456 [2024-07-14 09:44:36.757272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.456 qpair failed and we were unable to recover it. 00:34:52.456 [2024-07-14 09:44:36.767035] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.456 [2024-07-14 09:44:36.767197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.456 [2024-07-14 09:44:36.767223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.456 [2024-07-14 09:44:36.767237] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.456 [2024-07-14 09:44:36.767249] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.456 [2024-07-14 09:44:36.767278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.456 qpair failed and we were unable to recover it. 
00:34:52.456 [2024-07-14 09:44:36.777074] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.456 [2024-07-14 09:44:36.777252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.456 [2024-07-14 09:44:36.777277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.456 [2024-07-14 09:44:36.777306] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.456 [2024-07-14 09:44:36.777318] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.456 [2024-07-14 09:44:36.777347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.456 qpair failed and we were unable to recover it. 00:34:52.456 [2024-07-14 09:44:36.787117] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.456 [2024-07-14 09:44:36.787288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.456 [2024-07-14 09:44:36.787313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.456 [2024-07-14 09:44:36.787327] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.456 [2024-07-14 09:44:36.787339] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.456 [2024-07-14 09:44:36.787371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.456 qpair failed and we were unable to recover it. 00:34:52.456 [2024-07-14 09:44:36.797147] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.456 [2024-07-14 09:44:36.797306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.456 [2024-07-14 09:44:36.797332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.456 [2024-07-14 09:44:36.797347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.456 [2024-07-14 09:44:36.797376] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.456 [2024-07-14 09:44:36.797413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.456 qpair failed and we were unable to recover it. 
00:34:52.456 [2024-07-14 09:44:36.807256] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.457 [2024-07-14 09:44:36.807421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.457 [2024-07-14 09:44:36.807447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.457 [2024-07-14 09:44:36.807461] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.457 [2024-07-14 09:44:36.807473] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.457 [2024-07-14 09:44:36.807502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.457 qpair failed and we were unable to recover it. 00:34:52.457 [2024-07-14 09:44:36.817221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.457 [2024-07-14 09:44:36.817387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.457 [2024-07-14 09:44:36.817412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.457 [2024-07-14 09:44:36.817426] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.457 [2024-07-14 09:44:36.817439] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.457 [2024-07-14 09:44:36.817468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.457 qpair failed and we were unable to recover it. 00:34:52.457 [2024-07-14 09:44:36.827296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.457 [2024-07-14 09:44:36.827472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.457 [2024-07-14 09:44:36.827499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.457 [2024-07-14 09:44:36.827532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.457 [2024-07-14 09:44:36.827546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.457 [2024-07-14 09:44:36.827576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.457 qpair failed and we were unable to recover it. 
00:34:52.457 [2024-07-14 09:44:36.837240] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.457 [2024-07-14 09:44:36.837403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.457 [2024-07-14 09:44:36.837429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.457 [2024-07-14 09:44:36.837443] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.457 [2024-07-14 09:44:36.837455] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.457 [2024-07-14 09:44:36.837485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.457 qpair failed and we were unable to recover it. 00:34:52.457 [2024-07-14 09:44:36.847268] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.457 [2024-07-14 09:44:36.847444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.457 [2024-07-14 09:44:36.847470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.457 [2024-07-14 09:44:36.847484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.457 [2024-07-14 09:44:36.847496] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.457 [2024-07-14 09:44:36.847526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.457 qpair failed and we were unable to recover it. 00:34:52.457 [2024-07-14 09:44:36.857282] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.457 [2024-07-14 09:44:36.857456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.457 [2024-07-14 09:44:36.857481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.457 [2024-07-14 09:44:36.857496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.457 [2024-07-14 09:44:36.857508] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.457 [2024-07-14 09:44:36.857537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.457 qpair failed and we were unable to recover it. 
00:34:52.457 [2024-07-14 09:44:36.867336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.457 [2024-07-14 09:44:36.867498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.457 [2024-07-14 09:44:36.867524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.457 [2024-07-14 09:44:36.867538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.457 [2024-07-14 09:44:36.867550] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.457 [2024-07-14 09:44:36.867579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.457 qpair failed and we were unable to recover it. 00:34:52.457 [2024-07-14 09:44:36.877469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.457 [2024-07-14 09:44:36.877652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.457 [2024-07-14 09:44:36.877677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.457 [2024-07-14 09:44:36.877693] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.457 [2024-07-14 09:44:36.877705] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.457 [2024-07-14 09:44:36.877750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.457 qpair failed and we were unable to recover it. 00:34:52.457 [2024-07-14 09:44:36.887443] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.457 [2024-07-14 09:44:36.887609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.457 [2024-07-14 09:44:36.887634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.457 [2024-07-14 09:44:36.887648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.457 [2024-07-14 09:44:36.887669] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.457 [2024-07-14 09:44:36.887714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.457 qpair failed and we were unable to recover it. 
00:34:52.457 [2024-07-14 09:44:36.897445] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.457 [2024-07-14 09:44:36.897619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.457 [2024-07-14 09:44:36.897645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.457 [2024-07-14 09:44:36.897676] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.457 [2024-07-14 09:44:36.897689] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.457 [2024-07-14 09:44:36.897717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.457 qpair failed and we were unable to recover it. 00:34:52.717 [2024-07-14 09:44:36.907442] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.717 [2024-07-14 09:44:36.907642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.717 [2024-07-14 09:44:36.907669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.717 [2024-07-14 09:44:36.907685] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.717 [2024-07-14 09:44:36.907697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.717 [2024-07-14 09:44:36.907739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.717 qpair failed and we were unable to recover it. 00:34:52.717 [2024-07-14 09:44:36.917519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.717 [2024-07-14 09:44:36.917725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.717 [2024-07-14 09:44:36.917767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.717 [2024-07-14 09:44:36.917787] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.717 [2024-07-14 09:44:36.917802] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.717 [2024-07-14 09:44:36.917848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.717 qpair failed and we were unable to recover it. 
00:34:52.717 [2024-07-14 09:44:36.927481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.717 [2024-07-14 09:44:36.927643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.717 [2024-07-14 09:44:36.927669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.717 [2024-07-14 09:44:36.927684] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.717 [2024-07-14 09:44:36.927696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.717 [2024-07-14 09:44:36.927725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.717 qpair failed and we were unable to recover it. 00:34:52.717 [2024-07-14 09:44:36.937516] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.717 [2024-07-14 09:44:36.937691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.717 [2024-07-14 09:44:36.937717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.717 [2024-07-14 09:44:36.937732] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.717 [2024-07-14 09:44:36.937744] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.717 [2024-07-14 09:44:36.937772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.717 qpair failed and we were unable to recover it. 00:34:52.717 [2024-07-14 09:44:36.947579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.717 [2024-07-14 09:44:36.947740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.717 [2024-07-14 09:44:36.947766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.717 [2024-07-14 09:44:36.947780] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.717 [2024-07-14 09:44:36.947807] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.717 [2024-07-14 09:44:36.947836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.717 qpair failed and we were unable to recover it. 
00:34:52.717 [2024-07-14 09:44:36.957616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.717 [2024-07-14 09:44:36.957840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.717 [2024-07-14 09:44:36.957885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.717 [2024-07-14 09:44:36.957900] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.717 [2024-07-14 09:44:36.957927] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.717 [2024-07-14 09:44:36.957957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.717 qpair failed and we were unable to recover it. 00:34:52.717 [2024-07-14 09:44:36.967586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.717 [2024-07-14 09:44:36.967752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.717 [2024-07-14 09:44:36.967777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.717 [2024-07-14 09:44:36.967791] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.717 [2024-07-14 09:44:36.967803] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.717 [2024-07-14 09:44:36.967833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.717 qpair failed and we were unable to recover it. 00:34:52.717 [2024-07-14 09:44:36.977627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.717 [2024-07-14 09:44:36.977800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.717 [2024-07-14 09:44:36.977825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.717 [2024-07-14 09:44:36.977864] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.717 [2024-07-14 09:44:36.977886] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.717 [2024-07-14 09:44:36.977931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.717 qpair failed and we were unable to recover it. 
00:34:52.717 [2024-07-14 09:44:36.987653] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.717 [2024-07-14 09:44:36.987860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.717 [2024-07-14 09:44:36.987893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.717 [2024-07-14 09:44:36.987908] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.717 [2024-07-14 09:44:36.987920] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.717 [2024-07-14 09:44:36.987961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.717 qpair failed and we were unable to recover it. 00:34:52.717 [2024-07-14 09:44:36.997691] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.717 [2024-07-14 09:44:36.997853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.717 [2024-07-14 09:44:36.997887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.717 [2024-07-14 09:44:36.997903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.717 [2024-07-14 09:44:36.997916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.717 [2024-07-14 09:44:36.997945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.717 qpair failed and we were unable to recover it. 00:34:52.717 [2024-07-14 09:44:37.007739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.717 [2024-07-14 09:44:37.007910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.717 [2024-07-14 09:44:37.007936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.717 [2024-07-14 09:44:37.007950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.717 [2024-07-14 09:44:37.007963] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.717 [2024-07-14 09:44:37.008004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.717 qpair failed and we were unable to recover it. 
00:34:52.717 [2024-07-14 09:44:37.017735] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.718 [2024-07-14 09:44:37.017906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.718 [2024-07-14 09:44:37.017931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.718 [2024-07-14 09:44:37.017945] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.718 [2024-07-14 09:44:37.017957] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.718 [2024-07-14 09:44:37.017987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.718 qpair failed and we were unable to recover it. 00:34:52.718 [2024-07-14 09:44:37.027763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.718 [2024-07-14 09:44:37.027923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.718 [2024-07-14 09:44:37.027949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.718 [2024-07-14 09:44:37.027964] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.718 [2024-07-14 09:44:37.027976] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.718 [2024-07-14 09:44:37.028005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.718 qpair failed and we were unable to recover it. 00:34:52.718 [2024-07-14 09:44:37.037791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.718 [2024-07-14 09:44:37.037958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.718 [2024-07-14 09:44:37.037984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.718 [2024-07-14 09:44:37.037998] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.718 [2024-07-14 09:44:37.038011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.718 [2024-07-14 09:44:37.038040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.718 qpair failed and we were unable to recover it. 
00:34:52.718 [2024-07-14 09:44:37.047934] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.718 [2024-07-14 09:44:37.048112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.718 [2024-07-14 09:44:37.048137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.718 [2024-07-14 09:44:37.048152] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.718 [2024-07-14 09:44:37.048164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.718 [2024-07-14 09:44:37.048210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.718 qpair failed and we were unable to recover it. 00:34:52.718 [2024-07-14 09:44:37.057850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.718 [2024-07-14 09:44:37.058036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.718 [2024-07-14 09:44:37.058061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.718 [2024-07-14 09:44:37.058075] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.718 [2024-07-14 09:44:37.058087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.718 [2024-07-14 09:44:37.058117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.718 qpair failed and we were unable to recover it. 00:34:52.718 [2024-07-14 09:44:37.067892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.718 [2024-07-14 09:44:37.068100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.718 [2024-07-14 09:44:37.068126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.718 [2024-07-14 09:44:37.068145] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.718 [2024-07-14 09:44:37.068159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.718 [2024-07-14 09:44:37.068189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.718 qpair failed and we were unable to recover it. 
00:34:52.718 [2024-07-14 09:44:37.077904] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.718 [2024-07-14 09:44:37.078065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.718 [2024-07-14 09:44:37.078090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.718 [2024-07-14 09:44:37.078105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.718 [2024-07-14 09:44:37.078117] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.718 [2024-07-14 09:44:37.078146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.718 qpair failed and we were unable to recover it. 00:34:52.718 [2024-07-14 09:44:37.087942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.718 [2024-07-14 09:44:37.088103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.718 [2024-07-14 09:44:37.088127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.718 [2024-07-14 09:44:37.088142] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.718 [2024-07-14 09:44:37.088154] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.718 [2024-07-14 09:44:37.088183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.718 qpair failed and we were unable to recover it. 00:34:52.718 [2024-07-14 09:44:37.098030] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.718 [2024-07-14 09:44:37.098200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.718 [2024-07-14 09:44:37.098226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.718 [2024-07-14 09:44:37.098240] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.718 [2024-07-14 09:44:37.098252] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.718 [2024-07-14 09:44:37.098296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.718 qpair failed and we were unable to recover it. 
00:34:52.718 [2024-07-14 09:44:37.108045] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.718 [2024-07-14 09:44:37.108222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.718 [2024-07-14 09:44:37.108248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.718 [2024-07-14 09:44:37.108262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.718 [2024-07-14 09:44:37.108274] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.718 [2024-07-14 09:44:37.108306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.718 qpair failed and we were unable to recover it. 00:34:52.718 [2024-07-14 09:44:37.118061] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.718 [2024-07-14 09:44:37.118259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.718 [2024-07-14 09:44:37.118299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.718 [2024-07-14 09:44:37.118314] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.718 [2024-07-14 09:44:37.118326] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.718 [2024-07-14 09:44:37.118354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.718 qpair failed and we were unable to recover it. 00:34:52.718 [2024-07-14 09:44:37.128054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.718 [2024-07-14 09:44:37.128220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.718 [2024-07-14 09:44:37.128246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.718 [2024-07-14 09:44:37.128260] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.718 [2024-07-14 09:44:37.128272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.718 [2024-07-14 09:44:37.128301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.718 qpair failed and we were unable to recover it. 
00:34:52.718 [2024-07-14 09:44:37.138084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.718 [2024-07-14 09:44:37.138303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.718 [2024-07-14 09:44:37.138329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.718 [2024-07-14 09:44:37.138343] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.718 [2024-07-14 09:44:37.138355] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.718 [2024-07-14 09:44:37.138384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.718 qpair failed and we were unable to recover it. 00:34:52.718 [2024-07-14 09:44:37.148125] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.718 [2024-07-14 09:44:37.148290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.718 [2024-07-14 09:44:37.148316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.718 [2024-07-14 09:44:37.148330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.718 [2024-07-14 09:44:37.148363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.718 [2024-07-14 09:44:37.148392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.718 qpair failed and we were unable to recover it. 00:34:52.718 [2024-07-14 09:44:37.158150] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.718 [2024-07-14 09:44:37.158322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.718 [2024-07-14 09:44:37.158353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.719 [2024-07-14 09:44:37.158369] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.719 [2024-07-14 09:44:37.158381] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.719 [2024-07-14 09:44:37.158413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.719 qpair failed and we were unable to recover it. 
00:34:52.719 [2024-07-14 09:44:37.168161] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.978 [2024-07-14 09:44:37.168322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.978 [2024-07-14 09:44:37.168348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.978 [2024-07-14 09:44:37.168363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.978 [2024-07-14 09:44:37.168375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.978 [2024-07-14 09:44:37.168404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.978 qpair failed and we were unable to recover it. 00:34:52.978 [2024-07-14 09:44:37.178311] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.978 [2024-07-14 09:44:37.178479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.978 [2024-07-14 09:44:37.178505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.978 [2024-07-14 09:44:37.178519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.978 [2024-07-14 09:44:37.178546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.978 [2024-07-14 09:44:37.178575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.978 qpair failed and we were unable to recover it. 00:34:52.978 [2024-07-14 09:44:37.188238] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.978 [2024-07-14 09:44:37.188389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.978 [2024-07-14 09:44:37.188414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.978 [2024-07-14 09:44:37.188428] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.978 [2024-07-14 09:44:37.188440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.978 [2024-07-14 09:44:37.188471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.978 qpair failed and we were unable to recover it. 
00:34:52.978 [2024-07-14 09:44:37.198239] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.978 [2024-07-14 09:44:37.198393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.978 [2024-07-14 09:44:37.198419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.978 [2024-07-14 09:44:37.198433] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.978 [2024-07-14 09:44:37.198445] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.978 [2024-07-14 09:44:37.198480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.978 qpair failed and we were unable to recover it. 00:34:52.978 [2024-07-14 09:44:37.208314] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.978 [2024-07-14 09:44:37.208486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.978 [2024-07-14 09:44:37.208511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.978 [2024-07-14 09:44:37.208526] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.978 [2024-07-14 09:44:37.208538] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.978 [2024-07-14 09:44:37.208568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.978 qpair failed and we were unable to recover it. 00:34:52.978 [2024-07-14 09:44:37.218296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.978 [2024-07-14 09:44:37.218464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.978 [2024-07-14 09:44:37.218490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.978 [2024-07-14 09:44:37.218504] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.978 [2024-07-14 09:44:37.218516] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.978 [2024-07-14 09:44:37.218545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.978 qpair failed and we were unable to recover it. 
00:34:52.978 [2024-07-14 09:44:37.228359] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.978 [2024-07-14 09:44:37.228556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.978 [2024-07-14 09:44:37.228597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.978 [2024-07-14 09:44:37.228611] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.978 [2024-07-14 09:44:37.228623] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.978 [2024-07-14 09:44:37.228668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.978 qpair failed and we were unable to recover it. 00:34:52.978 [2024-07-14 09:44:37.238355] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.978 [2024-07-14 09:44:37.238515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.978 [2024-07-14 09:44:37.238540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.978 [2024-07-14 09:44:37.238554] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.978 [2024-07-14 09:44:37.238567] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.978 [2024-07-14 09:44:37.238599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.978 qpair failed and we were unable to recover it. 00:34:52.978 [2024-07-14 09:44:37.248398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.978 [2024-07-14 09:44:37.248559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.978 [2024-07-14 09:44:37.248589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.978 [2024-07-14 09:44:37.248604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.978 [2024-07-14 09:44:37.248616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.978 [2024-07-14 09:44:37.248645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.978 qpair failed and we were unable to recover it. 
00:34:52.978 [2024-07-14 09:44:37.258418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.978 [2024-07-14 09:44:37.258584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.978 [2024-07-14 09:44:37.258609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.978 [2024-07-14 09:44:37.258623] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.978 [2024-07-14 09:44:37.258635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.978 [2024-07-14 09:44:37.258664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.978 qpair failed and we were unable to recover it. 00:34:52.978 [2024-07-14 09:44:37.268469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.978 [2024-07-14 09:44:37.268648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.978 [2024-07-14 09:44:37.268673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.978 [2024-07-14 09:44:37.268687] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.978 [2024-07-14 09:44:37.268699] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.979 [2024-07-14 09:44:37.268728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.979 qpair failed and we were unable to recover it. 00:34:52.979 [2024-07-14 09:44:37.278502] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.979 [2024-07-14 09:44:37.278671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.979 [2024-07-14 09:44:37.278697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.979 [2024-07-14 09:44:37.278716] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.979 [2024-07-14 09:44:37.278744] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.979 [2024-07-14 09:44:37.278774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.979 qpair failed and we were unable to recover it. 
00:34:52.979 [2024-07-14 09:44:37.288534] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.979 [2024-07-14 09:44:37.288699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.979 [2024-07-14 09:44:37.288725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.979 [2024-07-14 09:44:37.288740] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.979 [2024-07-14 09:44:37.288772] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.979 [2024-07-14 09:44:37.288803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.979 qpair failed and we were unable to recover it. 00:34:52.979 [2024-07-14 09:44:37.298592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.979 [2024-07-14 09:44:37.298790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.979 [2024-07-14 09:44:37.298815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.979 [2024-07-14 09:44:37.298830] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.979 [2024-07-14 09:44:37.298842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.979 [2024-07-14 09:44:37.298878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.979 qpair failed and we were unable to recover it. 00:34:52.979 [2024-07-14 09:44:37.308613] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.979 [2024-07-14 09:44:37.308781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.979 [2024-07-14 09:44:37.308807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.979 [2024-07-14 09:44:37.308821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.979 [2024-07-14 09:44:37.308833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.979 [2024-07-14 09:44:37.308872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.979 qpair failed and we were unable to recover it. 
00:34:52.979 [2024-07-14 09:44:37.318593] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.979 [2024-07-14 09:44:37.318749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.979 [2024-07-14 09:44:37.318775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.979 [2024-07-14 09:44:37.318789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.979 [2024-07-14 09:44:37.318801] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.979 [2024-07-14 09:44:37.318845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.979 qpair failed and we were unable to recover it. 00:34:52.979 [2024-07-14 09:44:37.328638] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.979 [2024-07-14 09:44:37.328885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.979 [2024-07-14 09:44:37.328912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.979 [2024-07-14 09:44:37.328942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.979 [2024-07-14 09:44:37.328954] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.979 [2024-07-14 09:44:37.328985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.979 qpair failed and we were unable to recover it. 00:34:52.979 [2024-07-14 09:44:37.338669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.979 [2024-07-14 09:44:37.338884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.979 [2024-07-14 09:44:37.338910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.979 [2024-07-14 09:44:37.338924] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.979 [2024-07-14 09:44:37.338936] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.979 [2024-07-14 09:44:37.338966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.979 qpair failed and we were unable to recover it. 
00:34:52.979 [2024-07-14 09:44:37.348692] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.979 [2024-07-14 09:44:37.348854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.979 [2024-07-14 09:44:37.348886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.979 [2024-07-14 09:44:37.348902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.979 [2024-07-14 09:44:37.348914] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.979 [2024-07-14 09:44:37.348944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.979 qpair failed and we were unable to recover it. 00:34:52.979 [2024-07-14 09:44:37.358706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.979 [2024-07-14 09:44:37.358863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.979 [2024-07-14 09:44:37.358896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.979 [2024-07-14 09:44:37.358911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.979 [2024-07-14 09:44:37.358923] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.979 [2024-07-14 09:44:37.358952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.979 qpair failed and we were unable to recover it. 00:34:52.979 [2024-07-14 09:44:37.368764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.979 [2024-07-14 09:44:37.368926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.979 [2024-07-14 09:44:37.368951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.979 [2024-07-14 09:44:37.368965] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.979 [2024-07-14 09:44:37.368977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.979 [2024-07-14 09:44:37.369018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.979 qpair failed and we were unable to recover it. 
00:34:52.979 [2024-07-14 09:44:37.378804] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.979 [2024-07-14 09:44:37.378981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.979 [2024-07-14 09:44:37.379007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.979 [2024-07-14 09:44:37.379026] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.979 [2024-07-14 09:44:37.379039] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.979 [2024-07-14 09:44:37.379069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.979 qpair failed and we were unable to recover it. 00:34:52.979 [2024-07-14 09:44:37.388808] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.979 [2024-07-14 09:44:37.388970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.979 [2024-07-14 09:44:37.388995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.979 [2024-07-14 09:44:37.389009] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.979 [2024-07-14 09:44:37.389021] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.979 [2024-07-14 09:44:37.389051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.979 qpair failed and we were unable to recover it. 00:34:52.979 [2024-07-14 09:44:37.398907] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.979 [2024-07-14 09:44:37.399064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.979 [2024-07-14 09:44:37.399090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.979 [2024-07-14 09:44:37.399104] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.979 [2024-07-14 09:44:37.399116] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.979 [2024-07-14 09:44:37.399144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.979 qpair failed and we were unable to recover it. 
00:34:52.979 [2024-07-14 09:44:37.408897] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.979 [2024-07-14 09:44:37.409089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.979 [2024-07-14 09:44:37.409116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.979 [2024-07-14 09:44:37.409135] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.979 [2024-07-14 09:44:37.409148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.979 [2024-07-14 09:44:37.409179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.979 qpair failed and we were unable to recover it. 00:34:52.979 [2024-07-14 09:44:37.418890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.980 [2024-07-14 09:44:37.419067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.980 [2024-07-14 09:44:37.419093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.980 [2024-07-14 09:44:37.419107] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.980 [2024-07-14 09:44:37.419120] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.980 [2024-07-14 09:44:37.419149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.980 qpair failed and we were unable to recover it. 00:34:52.980 [2024-07-14 09:44:37.428939] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:52.980 [2024-07-14 09:44:37.429140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:52.980 [2024-07-14 09:44:37.429165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:52.980 [2024-07-14 09:44:37.429178] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:52.980 [2024-07-14 09:44:37.429190] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:52.980 [2024-07-14 09:44:37.429220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:52.980 qpair failed and we were unable to recover it. 
00:34:53.238 [2024-07-14 09:44:37.438961] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.239 [2024-07-14 09:44:37.439120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.239 [2024-07-14 09:44:37.439146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.239 [2024-07-14 09:44:37.439160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.239 [2024-07-14 09:44:37.439187] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.239 [2024-07-14 09:44:37.439216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.239 qpair failed and we were unable to recover it. 00:34:53.239 [2024-07-14 09:44:37.448980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.239 [2024-07-14 09:44:37.449141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.239 [2024-07-14 09:44:37.449167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.239 [2024-07-14 09:44:37.449181] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.239 [2024-07-14 09:44:37.449193] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.239 [2024-07-14 09:44:37.449222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.239 qpair failed and we were unable to recover it. 00:34:53.239 [2024-07-14 09:44:37.459002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.239 [2024-07-14 09:44:37.459165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.239 [2024-07-14 09:44:37.459191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.239 [2024-07-14 09:44:37.459205] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.239 [2024-07-14 09:44:37.459217] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.239 [2024-07-14 09:44:37.459246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.239 qpair failed and we were unable to recover it. 
00:34:53.239 [2024-07-14 09:44:37.469052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.239 [2024-07-14 09:44:37.469210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.239 [2024-07-14 09:44:37.469236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.239 [2024-07-14 09:44:37.469259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.239 [2024-07-14 09:44:37.469272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.239 [2024-07-14 09:44:37.469304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.239 qpair failed and we were unable to recover it. 00:34:53.239 [2024-07-14 09:44:37.479076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.239 [2024-07-14 09:44:37.479235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.239 [2024-07-14 09:44:37.479260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.239 [2024-07-14 09:44:37.479274] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.239 [2024-07-14 09:44:37.479286] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.239 [2024-07-14 09:44:37.479315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.239 qpair failed and we were unable to recover it. 00:34:53.239 [2024-07-14 09:44:37.489122] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.239 [2024-07-14 09:44:37.489318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.239 [2024-07-14 09:44:37.489357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.239 [2024-07-14 09:44:37.489371] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.239 [2024-07-14 09:44:37.489383] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.239 [2024-07-14 09:44:37.489427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.239 qpair failed and we were unable to recover it. 
00:34:53.239 [2024-07-14 09:44:37.499146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.239 [2024-07-14 09:44:37.499310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.239 [2024-07-14 09:44:37.499335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.239 [2024-07-14 09:44:37.499349] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.239 [2024-07-14 09:44:37.499377] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.239 [2024-07-14 09:44:37.499405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.239 qpair failed and we were unable to recover it. 00:34:53.239 [2024-07-14 09:44:37.509268] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.239 [2024-07-14 09:44:37.509448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.239 [2024-07-14 09:44:37.509475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.239 [2024-07-14 09:44:37.509489] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.239 [2024-07-14 09:44:37.509501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.239 [2024-07-14 09:44:37.509530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.239 qpair failed and we were unable to recover it. 00:34:53.239 [2024-07-14 09:44:37.519181] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.239 [2024-07-14 09:44:37.519343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.239 [2024-07-14 09:44:37.519368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.239 [2024-07-14 09:44:37.519383] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.239 [2024-07-14 09:44:37.519395] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.239 [2024-07-14 09:44:37.519424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.239 qpair failed and we were unable to recover it. 
00:34:53.239 [2024-07-14 09:44:37.529216] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.239 [2024-07-14 09:44:37.529411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.239 [2024-07-14 09:44:37.529436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.239 [2024-07-14 09:44:37.529450] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.239 [2024-07-14 09:44:37.529462] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.239 [2024-07-14 09:44:37.529491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.239 qpair failed and we were unable to recover it. 00:34:53.239 [2024-07-14 09:44:37.539234] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.239 [2024-07-14 09:44:37.539447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.239 [2024-07-14 09:44:37.539472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.239 [2024-07-14 09:44:37.539486] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.239 [2024-07-14 09:44:37.539498] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.239 [2024-07-14 09:44:37.539527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.239 qpair failed and we were unable to recover it. 00:34:53.239 [2024-07-14 09:44:37.549271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.239 [2024-07-14 09:44:37.549426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.239 [2024-07-14 09:44:37.549451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.239 [2024-07-14 09:44:37.549465] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.239 [2024-07-14 09:44:37.549477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.239 [2024-07-14 09:44:37.549506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.239 qpair failed and we were unable to recover it. 
00:34:53.239 [2024-07-14 09:44:37.559282] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.239 [2024-07-14 09:44:37.559452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.239 [2024-07-14 09:44:37.559482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.239 [2024-07-14 09:44:37.559497] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.239 [2024-07-14 09:44:37.559509] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.239 [2024-07-14 09:44:37.559541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.239 qpair failed and we were unable to recover it. 00:34:53.239 [2024-07-14 09:44:37.569363] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.239 [2024-07-14 09:44:37.569528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.239 [2024-07-14 09:44:37.569553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.239 [2024-07-14 09:44:37.569567] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.239 [2024-07-14 09:44:37.569594] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.239 [2024-07-14 09:44:37.569623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.239 qpair failed and we were unable to recover it. 00:34:53.239 [2024-07-14 09:44:37.579351] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.239 [2024-07-14 09:44:37.579554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.239 [2024-07-14 09:44:37.579580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.240 [2024-07-14 09:44:37.579595] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.240 [2024-07-14 09:44:37.579608] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.240 [2024-07-14 09:44:37.579637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.240 qpair failed and we were unable to recover it. 
00:34:53.240 [2024-07-14 09:44:37.589384] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.240 [2024-07-14 09:44:37.589547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.240 [2024-07-14 09:44:37.589573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.240 [2024-07-14 09:44:37.589588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.240 [2024-07-14 09:44:37.589601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.240 [2024-07-14 09:44:37.589633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.240 qpair failed and we were unable to recover it. 00:34:53.240 [2024-07-14 09:44:37.599439] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.240 [2024-07-14 09:44:37.599620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.240 [2024-07-14 09:44:37.599645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.240 [2024-07-14 09:44:37.599674] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.240 [2024-07-14 09:44:37.599686] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.240 [2024-07-14 09:44:37.599721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.240 qpair failed and we were unable to recover it. 00:34:53.240 [2024-07-14 09:44:37.609457] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.240 [2024-07-14 09:44:37.609688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.240 [2024-07-14 09:44:37.609713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.240 [2024-07-14 09:44:37.609727] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.240 [2024-07-14 09:44:37.609738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.240 [2024-07-14 09:44:37.609782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.240 qpair failed and we were unable to recover it. 
00:34:53.240 [2024-07-14 09:44:37.619480] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.240 [2024-07-14 09:44:37.619693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.240 [2024-07-14 09:44:37.619733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.240 [2024-07-14 09:44:37.619747] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.240 [2024-07-14 09:44:37.619759] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.240 [2024-07-14 09:44:37.619788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.240 qpair failed and we were unable to recover it. 00:34:53.240 [2024-07-14 09:44:37.629499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.240 [2024-07-14 09:44:37.629656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.240 [2024-07-14 09:44:37.629683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.240 [2024-07-14 09:44:37.629697] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.240 [2024-07-14 09:44:37.629709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.240 [2024-07-14 09:44:37.629738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.240 qpair failed and we were unable to recover it. 00:34:53.240 [2024-07-14 09:44:37.639537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.240 [2024-07-14 09:44:37.639735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.240 [2024-07-14 09:44:37.639761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.240 [2024-07-14 09:44:37.639775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.240 [2024-07-14 09:44:37.639787] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.240 [2024-07-14 09:44:37.639816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.240 qpair failed and we were unable to recover it. 
00:34:53.240 [2024-07-14 09:44:37.649531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.240 [2024-07-14 09:44:37.649698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.240 [2024-07-14 09:44:37.649729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.240 [2024-07-14 09:44:37.649745] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.240 [2024-07-14 09:44:37.649757] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.240 [2024-07-14 09:44:37.649789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.240 qpair failed and we were unable to recover it. 00:34:53.240 [2024-07-14 09:44:37.659568] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.240 [2024-07-14 09:44:37.659735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.240 [2024-07-14 09:44:37.659761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.240 [2024-07-14 09:44:37.659775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.240 [2024-07-14 09:44:37.659787] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.240 [2024-07-14 09:44:37.659817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.240 qpair failed and we were unable to recover it. 00:34:53.240 [2024-07-14 09:44:37.669589] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.240 [2024-07-14 09:44:37.669749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.240 [2024-07-14 09:44:37.669781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.240 [2024-07-14 09:44:37.669796] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.240 [2024-07-14 09:44:37.669808] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.240 [2024-07-14 09:44:37.669838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.240 qpair failed and we were unable to recover it. 
00:34:53.240 [2024-07-14 09:44:37.679646] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.240 [2024-07-14 09:44:37.679820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.240 [2024-07-14 09:44:37.679846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.240 [2024-07-14 09:44:37.679876] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.240 [2024-07-14 09:44:37.679890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.240 [2024-07-14 09:44:37.679935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.240 qpair failed and we were unable to recover it. 00:34:53.240 [2024-07-14 09:44:37.689656] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.240 [2024-07-14 09:44:37.689856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.240 [2024-07-14 09:44:37.689891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.240 [2024-07-14 09:44:37.689906] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.240 [2024-07-14 09:44:37.689925] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.240 [2024-07-14 09:44:37.689966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.240 qpair failed and we were unable to recover it. 00:34:53.500 [2024-07-14 09:44:37.699671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.500 [2024-07-14 09:44:37.699837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.500 [2024-07-14 09:44:37.699863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.500 [2024-07-14 09:44:37.699886] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.500 [2024-07-14 09:44:37.699899] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.500 [2024-07-14 09:44:37.699928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.500 qpair failed and we were unable to recover it. 
00:34:53.500 [2024-07-14 09:44:37.709712] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.500 [2024-07-14 09:44:37.709910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.500 [2024-07-14 09:44:37.709936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.500 [2024-07-14 09:44:37.709950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.500 [2024-07-14 09:44:37.709962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.500 [2024-07-14 09:44:37.709992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.500 qpair failed and we were unable to recover it. 00:34:53.500 [2024-07-14 09:44:37.719720] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.500 [2024-07-14 09:44:37.719891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.500 [2024-07-14 09:44:37.719917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.500 [2024-07-14 09:44:37.719931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.500 [2024-07-14 09:44:37.719943] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.500 [2024-07-14 09:44:37.719972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.500 qpair failed and we were unable to recover it. 00:34:53.500 [2024-07-14 09:44:37.729755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.500 [2024-07-14 09:44:37.729974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.500 [2024-07-14 09:44:37.729999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.500 [2024-07-14 09:44:37.730014] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.500 [2024-07-14 09:44:37.730026] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.500 [2024-07-14 09:44:37.730056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.500 qpair failed and we were unable to recover it. 
00:34:53.500 [2024-07-14 09:44:37.739785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.500 [2024-07-14 09:44:37.739961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.500 [2024-07-14 09:44:37.739987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.500 [2024-07-14 09:44:37.740001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.500 [2024-07-14 09:44:37.740013] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.500 [2024-07-14 09:44:37.740043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.500 qpair failed and we were unable to recover it. 00:34:53.500 [2024-07-14 09:44:37.749802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.500 [2024-07-14 09:44:37.749996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.500 [2024-07-14 09:44:37.750022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.500 [2024-07-14 09:44:37.750037] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.500 [2024-07-14 09:44:37.750049] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.500 [2024-07-14 09:44:37.750078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.500 qpair failed and we were unable to recover it. 00:34:53.500 [2024-07-14 09:44:37.759848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.500 [2024-07-14 09:44:37.760036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.500 [2024-07-14 09:44:37.760061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.500 [2024-07-14 09:44:37.760076] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.500 [2024-07-14 09:44:37.760088] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.500 [2024-07-14 09:44:37.760121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.500 qpair failed and we were unable to recover it. 
00:34:53.500 [2024-07-14 09:44:37.769852] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.500 [2024-07-14 09:44:37.770036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.500 [2024-07-14 09:44:37.770061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.500 [2024-07-14 09:44:37.770076] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.500 [2024-07-14 09:44:37.770088] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.500 [2024-07-14 09:44:37.770118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.500 qpair failed and we were unable to recover it. 00:34:53.500 [2024-07-14 09:44:37.779905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.500 [2024-07-14 09:44:37.780092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.500 [2024-07-14 09:44:37.780118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.500 [2024-07-14 09:44:37.780133] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.500 [2024-07-14 09:44:37.780150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.500 [2024-07-14 09:44:37.780195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.500 qpair failed and we were unable to recover it. 00:34:53.500 [2024-07-14 09:44:37.789943] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.500 [2024-07-14 09:44:37.790099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.500 [2024-07-14 09:44:37.790124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.500 [2024-07-14 09:44:37.790139] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.500 [2024-07-14 09:44:37.790151] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.500 [2024-07-14 09:44:37.790180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.500 qpair failed and we were unable to recover it. 
00:34:53.500 [2024-07-14 09:44:37.799944] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.500 [2024-07-14 09:44:37.800104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.500 [2024-07-14 09:44:37.800130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.500 [2024-07-14 09:44:37.800144] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.500 [2024-07-14 09:44:37.800156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.500 [2024-07-14 09:44:37.800185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.500 qpair failed and we were unable to recover it. 00:34:53.500 [2024-07-14 09:44:37.809992] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.500 [2024-07-14 09:44:37.810157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.500 [2024-07-14 09:44:37.810183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.500 [2024-07-14 09:44:37.810212] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.500 [2024-07-14 09:44:37.810224] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.500 [2024-07-14 09:44:37.810255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.500 qpair failed and we were unable to recover it. 00:34:53.500 [2024-07-14 09:44:37.820058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.500 [2024-07-14 09:44:37.820282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.500 [2024-07-14 09:44:37.820309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.501 [2024-07-14 09:44:37.820324] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.501 [2024-07-14 09:44:37.820336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.501 [2024-07-14 09:44:37.820380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.501 qpair failed and we were unable to recover it. 
00:34:53.501 [2024-07-14 09:44:37.830045] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.501 [2024-07-14 09:44:37.830211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.501 [2024-07-14 09:44:37.830238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.501 [2024-07-14 09:44:37.830271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.501 [2024-07-14 09:44:37.830284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.501 [2024-07-14 09:44:37.830314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.501 qpair failed and we were unable to recover it. 00:34:53.501 [2024-07-14 09:44:37.840077] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.501 [2024-07-14 09:44:37.840276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.501 [2024-07-14 09:44:37.840317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.501 [2024-07-14 09:44:37.840335] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.501 [2024-07-14 09:44:37.840349] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.501 [2024-07-14 09:44:37.840393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.501 qpair failed and we were unable to recover it. 00:34:53.501 [2024-07-14 09:44:37.850117] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.501 [2024-07-14 09:44:37.850282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.501 [2024-07-14 09:44:37.850308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.501 [2024-07-14 09:44:37.850322] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.501 [2024-07-14 09:44:37.850335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.501 [2024-07-14 09:44:37.850367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.501 qpair failed and we were unable to recover it. 
00:34:53.501 [2024-07-14 09:44:37.860115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.501 [2024-07-14 09:44:37.860327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.501 [2024-07-14 09:44:37.860352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.501 [2024-07-14 09:44:37.860366] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.501 [2024-07-14 09:44:37.860378] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.501 [2024-07-14 09:44:37.860407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.501 qpair failed and we were unable to recover it. 00:34:53.501 [2024-07-14 09:44:37.870143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.501 [2024-07-14 09:44:37.870308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.501 [2024-07-14 09:44:37.870335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.501 [2024-07-14 09:44:37.870357] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.501 [2024-07-14 09:44:37.870386] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.501 [2024-07-14 09:44:37.870415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.501 qpair failed and we were unable to recover it. 00:34:53.501 [2024-07-14 09:44:37.880179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.501 [2024-07-14 09:44:37.880348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.501 [2024-07-14 09:44:37.880374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.501 [2024-07-14 09:44:37.880403] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.501 [2024-07-14 09:44:37.880415] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.501 [2024-07-14 09:44:37.880473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.501 qpair failed and we were unable to recover it. 
00:34:53.501 [2024-07-14 09:44:37.890231] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.501 [2024-07-14 09:44:37.890434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.501 [2024-07-14 09:44:37.890458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.501 [2024-07-14 09:44:37.890472] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.501 [2024-07-14 09:44:37.890484] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.501 [2024-07-14 09:44:37.890530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.501 qpair failed and we were unable to recover it. 00:34:53.501 [2024-07-14 09:44:37.900253] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.501 [2024-07-14 09:44:37.900424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.501 [2024-07-14 09:44:37.900450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.501 [2024-07-14 09:44:37.900480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.501 [2024-07-14 09:44:37.900492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.501 [2024-07-14 09:44:37.900521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.501 qpair failed and we were unable to recover it. 00:34:53.501 [2024-07-14 09:44:37.910243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.501 [2024-07-14 09:44:37.910407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.501 [2024-07-14 09:44:37.910432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.501 [2024-07-14 09:44:37.910447] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.501 [2024-07-14 09:44:37.910459] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.501 [2024-07-14 09:44:37.910488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.501 qpair failed and we were unable to recover it. 
00:34:53.501 [2024-07-14 09:44:37.920306] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.501 [2024-07-14 09:44:37.920464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.501 [2024-07-14 09:44:37.920490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.501 [2024-07-14 09:44:37.920504] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.501 [2024-07-14 09:44:37.920516] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.501 [2024-07-14 09:44:37.920547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.501 qpair failed and we were unable to recover it. 00:34:53.501 [2024-07-14 09:44:37.930289] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.501 [2024-07-14 09:44:37.930454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.501 [2024-07-14 09:44:37.930480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.501 [2024-07-14 09:44:37.930494] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.501 [2024-07-14 09:44:37.930506] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.501 [2024-07-14 09:44:37.930535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.501 qpair failed and we were unable to recover it. 00:34:53.501 [2024-07-14 09:44:37.940355] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.501 [2024-07-14 09:44:37.940525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.501 [2024-07-14 09:44:37.940550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.501 [2024-07-14 09:44:37.940565] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.501 [2024-07-14 09:44:37.940577] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.501 [2024-07-14 09:44:37.940606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.501 qpair failed and we were unable to recover it. 
00:34:53.501 [2024-07-14 09:44:37.950380] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.501 [2024-07-14 09:44:37.950540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.501 [2024-07-14 09:44:37.950565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.501 [2024-07-14 09:44:37.950580] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.501 [2024-07-14 09:44:37.950592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.501 [2024-07-14 09:44:37.950621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.501 qpair failed and we were unable to recover it. 00:34:53.760 [2024-07-14 09:44:37.960358] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.760 [2024-07-14 09:44:37.960517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.760 [2024-07-14 09:44:37.960547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.760 [2024-07-14 09:44:37.960562] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.761 [2024-07-14 09:44:37.960574] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.761 [2024-07-14 09:44:37.960606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.761 qpair failed and we were unable to recover it. 00:34:53.761 [2024-07-14 09:44:37.970424] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.761 [2024-07-14 09:44:37.970592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.761 [2024-07-14 09:44:37.970618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.761 [2024-07-14 09:44:37.970632] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.761 [2024-07-14 09:44:37.970644] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.761 [2024-07-14 09:44:37.970674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.761 qpair failed and we were unable to recover it. 
00:34:53.761 [2024-07-14 09:44:37.980427] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.761 [2024-07-14 09:44:37.980597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.761 [2024-07-14 09:44:37.980622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.761 [2024-07-14 09:44:37.980636] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.761 [2024-07-14 09:44:37.980648] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.761 [2024-07-14 09:44:37.980677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.761 qpair failed and we were unable to recover it. 00:34:53.761 [2024-07-14 09:44:37.990444] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.761 [2024-07-14 09:44:37.990610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.761 [2024-07-14 09:44:37.990635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.761 [2024-07-14 09:44:37.990650] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.761 [2024-07-14 09:44:37.990662] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.761 [2024-07-14 09:44:37.990694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.761 qpair failed and we were unable to recover it. 00:34:53.761 [2024-07-14 09:44:38.000517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.761 [2024-07-14 09:44:38.000717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.761 [2024-07-14 09:44:38.000743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.761 [2024-07-14 09:44:38.000757] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.761 [2024-07-14 09:44:38.000769] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.761 [2024-07-14 09:44:38.000805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.761 qpair failed and we were unable to recover it. 
00:34:53.761 [2024-07-14 09:44:38.010548] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.761 [2024-07-14 09:44:38.010714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.761 [2024-07-14 09:44:38.010740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.761 [2024-07-14 09:44:38.010754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.761 [2024-07-14 09:44:38.010782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.761 [2024-07-14 09:44:38.010814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.761 qpair failed and we were unable to recover it. 00:34:53.761 [2024-07-14 09:44:38.020586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.761 [2024-07-14 09:44:38.020758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.761 [2024-07-14 09:44:38.020783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.761 [2024-07-14 09:44:38.020797] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.761 [2024-07-14 09:44:38.020824] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.761 [2024-07-14 09:44:38.020853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.761 qpair failed and we were unable to recover it. 00:34:53.761 [2024-07-14 09:44:38.030638] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.761 [2024-07-14 09:44:38.030850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.761 [2024-07-14 09:44:38.030888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.761 [2024-07-14 09:44:38.030905] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.761 [2024-07-14 09:44:38.030918] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.761 [2024-07-14 09:44:38.030948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.761 qpair failed and we were unable to recover it. 
00:34:53.761 [2024-07-14 09:44:38.040611] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.761 [2024-07-14 09:44:38.040767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.761 [2024-07-14 09:44:38.040792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.761 [2024-07-14 09:44:38.040807] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.761 [2024-07-14 09:44:38.040819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.761 [2024-07-14 09:44:38.040848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.761 qpair failed and we were unable to recover it. 00:34:53.761 [2024-07-14 09:44:38.050669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.761 [2024-07-14 09:44:38.050862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.761 [2024-07-14 09:44:38.050902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.761 [2024-07-14 09:44:38.050918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.761 [2024-07-14 09:44:38.050930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.761 [2024-07-14 09:44:38.050960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.761 qpair failed and we were unable to recover it. 00:34:53.761 [2024-07-14 09:44:38.060677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.761 [2024-07-14 09:44:38.060847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.761 [2024-07-14 09:44:38.060880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.761 [2024-07-14 09:44:38.060896] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.761 [2024-07-14 09:44:38.060908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.761 [2024-07-14 09:44:38.060937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.761 qpair failed and we were unable to recover it. 
00:34:53.761 [2024-07-14 09:44:38.070685] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.761 [2024-07-14 09:44:38.070840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.761 [2024-07-14 09:44:38.070871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.761 [2024-07-14 09:44:38.070887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.761 [2024-07-14 09:44:38.070902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.761 [2024-07-14 09:44:38.070932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.761 qpair failed and we were unable to recover it. 00:34:53.761 [2024-07-14 09:44:38.080711] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.761 [2024-07-14 09:44:38.080862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.761 [2024-07-14 09:44:38.080895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.761 [2024-07-14 09:44:38.080910] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.761 [2024-07-14 09:44:38.080922] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.761 [2024-07-14 09:44:38.080952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.761 qpair failed and we were unable to recover it. 00:34:53.761 [2024-07-14 09:44:38.090765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.761 [2024-07-14 09:44:38.090927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.761 [2024-07-14 09:44:38.090953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.761 [2024-07-14 09:44:38.090967] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.761 [2024-07-14 09:44:38.090988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.761 [2024-07-14 09:44:38.091018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.761 qpair failed and we were unable to recover it. 
00:34:53.761 [2024-07-14 09:44:38.100807] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.761 [2024-07-14 09:44:38.101025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.761 [2024-07-14 09:44:38.101051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.761 [2024-07-14 09:44:38.101065] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.761 [2024-07-14 09:44:38.101077] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.761 [2024-07-14 09:44:38.101107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.762 qpair failed and we were unable to recover it. 00:34:53.762 [2024-07-14 09:44:38.110815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.762 [2024-07-14 09:44:38.110988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.762 [2024-07-14 09:44:38.111014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.762 [2024-07-14 09:44:38.111028] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.762 [2024-07-14 09:44:38.111040] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.762 [2024-07-14 09:44:38.111070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.762 qpair failed and we were unable to recover it. 00:34:53.762 [2024-07-14 09:44:38.120876] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.762 [2024-07-14 09:44:38.121042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.762 [2024-07-14 09:44:38.121068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.762 [2024-07-14 09:44:38.121083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.762 [2024-07-14 09:44:38.121096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.762 [2024-07-14 09:44:38.121125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.762 qpair failed and we were unable to recover it. 
00:34:53.762 [2024-07-14 09:44:38.130887] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.762 [2024-07-14 09:44:38.131051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.762 [2024-07-14 09:44:38.131077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.762 [2024-07-14 09:44:38.131091] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.762 [2024-07-14 09:44:38.131104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.762 [2024-07-14 09:44:38.131133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.762 qpair failed and we were unable to recover it. 00:34:53.762 [2024-07-14 09:44:38.140928] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.762 [2024-07-14 09:44:38.141135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.762 [2024-07-14 09:44:38.141175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.762 [2024-07-14 09:44:38.141190] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.762 [2024-07-14 09:44:38.141202] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.762 [2024-07-14 09:44:38.141245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.762 qpair failed and we were unable to recover it. 00:34:53.762 [2024-07-14 09:44:38.150983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.762 [2024-07-14 09:44:38.151147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.762 [2024-07-14 09:44:38.151173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.762 [2024-07-14 09:44:38.151187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.762 [2024-07-14 09:44:38.151199] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.762 [2024-07-14 09:44:38.151243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.762 qpair failed and we were unable to recover it. 
00:34:53.762 [2024-07-14 09:44:38.160964] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.762 [2024-07-14 09:44:38.161125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.762 [2024-07-14 09:44:38.161151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.762 [2024-07-14 09:44:38.161165] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.762 [2024-07-14 09:44:38.161178] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.762 [2024-07-14 09:44:38.161219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.762 qpair failed and we were unable to recover it. 00:34:53.762 [2024-07-14 09:44:38.170993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.762 [2024-07-14 09:44:38.171153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.762 [2024-07-14 09:44:38.171178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.762 [2024-07-14 09:44:38.171193] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.762 [2024-07-14 09:44:38.171205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.762 [2024-07-14 09:44:38.171234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.762 qpair failed and we were unable to recover it. 00:34:53.762 [2024-07-14 09:44:38.181041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.762 [2024-07-14 09:44:38.181212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.762 [2024-07-14 09:44:38.181238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.762 [2024-07-14 09:44:38.181266] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.762 [2024-07-14 09:44:38.181283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.762 [2024-07-14 09:44:38.181327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.762 qpair failed and we were unable to recover it. 
00:34:53.762 [2024-07-14 09:44:38.191067] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.762 [2024-07-14 09:44:38.191225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.762 [2024-07-14 09:44:38.191251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.762 [2024-07-14 09:44:38.191266] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.762 [2024-07-14 09:44:38.191278] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.762 [2024-07-14 09:44:38.191323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.762 qpair failed and we were unable to recover it. 00:34:53.762 [2024-07-14 09:44:38.201097] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.762 [2024-07-14 09:44:38.201262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.762 [2024-07-14 09:44:38.201288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.762 [2024-07-14 09:44:38.201302] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.762 [2024-07-14 09:44:38.201314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.762 [2024-07-14 09:44:38.201343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.762 qpair failed and we were unable to recover it. 00:34:53.762 [2024-07-14 09:44:38.211108] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:53.762 [2024-07-14 09:44:38.211269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:53.762 [2024-07-14 09:44:38.211295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:53.762 [2024-07-14 09:44:38.211309] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:53.762 [2024-07-14 09:44:38.211321] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:53.762 [2024-07-14 09:44:38.211350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:53.762 qpair failed and we were unable to recover it. 
00:34:54.021 [2024-07-14 09:44:38.221160] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.021 [2024-07-14 09:44:38.221329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.021 [2024-07-14 09:44:38.221355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.021 [2024-07-14 09:44:38.221369] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.021 [2024-07-14 09:44:38.221396] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.021 [2024-07-14 09:44:38.221425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.021 qpair failed and we were unable to recover it. 00:34:54.021 [2024-07-14 09:44:38.231155] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.021 [2024-07-14 09:44:38.231318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.021 [2024-07-14 09:44:38.231344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.021 [2024-07-14 09:44:38.231358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.021 [2024-07-14 09:44:38.231370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.021 [2024-07-14 09:44:38.231400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.021 qpair failed and we were unable to recover it. 00:34:54.021 [2024-07-14 09:44:38.241212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.021 [2024-07-14 09:44:38.241374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.021 [2024-07-14 09:44:38.241400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.021 [2024-07-14 09:44:38.241414] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.021 [2024-07-14 09:44:38.241441] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.021 [2024-07-14 09:44:38.241472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.021 qpair failed and we were unable to recover it. 
00:34:54.021 [2024-07-14 09:44:38.251254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.021 [2024-07-14 09:44:38.251439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.021 [2024-07-14 09:44:38.251464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.021 [2024-07-14 09:44:38.251493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.021 [2024-07-14 09:44:38.251505] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.021 [2024-07-14 09:44:38.251551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.021 qpair failed and we were unable to recover it. 00:34:54.021 [2024-07-14 09:44:38.261320] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.021 [2024-07-14 09:44:38.261489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.021 [2024-07-14 09:44:38.261516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.021 [2024-07-14 09:44:38.261534] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.021 [2024-07-14 09:44:38.261562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.021 [2024-07-14 09:44:38.261592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.021 qpair failed and we were unable to recover it. 00:34:54.021 [2024-07-14 09:44:38.271264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.021 [2024-07-14 09:44:38.271421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.021 [2024-07-14 09:44:38.271447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.021 [2024-07-14 09:44:38.271467] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.021 [2024-07-14 09:44:38.271480] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.021 [2024-07-14 09:44:38.271510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.021 qpair failed and we were unable to recover it. 
00:34:54.021 [2024-07-14 09:44:38.281347] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.021 [2024-07-14 09:44:38.281543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.021 [2024-07-14 09:44:38.281584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.021 [2024-07-14 09:44:38.281599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.022 [2024-07-14 09:44:38.281611] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.022 [2024-07-14 09:44:38.281657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.022 qpair failed and we were unable to recover it. 00:34:54.022 [2024-07-14 09:44:38.291347] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.022 [2024-07-14 09:44:38.291514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.022 [2024-07-14 09:44:38.291540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.022 [2024-07-14 09:44:38.291554] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.022 [2024-07-14 09:44:38.291566] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.022 [2024-07-14 09:44:38.291596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.022 qpair failed and we were unable to recover it. 00:34:54.022 [2024-07-14 09:44:38.301398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.022 [2024-07-14 09:44:38.301579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.022 [2024-07-14 09:44:38.301605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.022 [2024-07-14 09:44:38.301619] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.022 [2024-07-14 09:44:38.301647] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.022 [2024-07-14 09:44:38.301676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.022 qpair failed and we were unable to recover it. 
00:34:54.022 [2024-07-14 09:44:38.311397] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.022 [2024-07-14 09:44:38.311558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.022 [2024-07-14 09:44:38.311585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.022 [2024-07-14 09:44:38.311603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.022 [2024-07-14 09:44:38.311617] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.022 [2024-07-14 09:44:38.311661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.022 qpair failed and we were unable to recover it. 00:34:54.022 [2024-07-14 09:44:38.321438] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.022 [2024-07-14 09:44:38.321626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.022 [2024-07-14 09:44:38.321667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.022 [2024-07-14 09:44:38.321683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.022 [2024-07-14 09:44:38.321695] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.022 [2024-07-14 09:44:38.321739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.022 qpair failed and we were unable to recover it. 00:34:54.022 [2024-07-14 09:44:38.331478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.022 [2024-07-14 09:44:38.331676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.022 [2024-07-14 09:44:38.331717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.022 [2024-07-14 09:44:38.331731] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.022 [2024-07-14 09:44:38.331743] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.022 [2024-07-14 09:44:38.331791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.022 qpair failed and we were unable to recover it. 
00:34:54.022 [2024-07-14 09:44:38.341493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.022 [2024-07-14 09:44:38.341671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.022 [2024-07-14 09:44:38.341699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.022 [2024-07-14 09:44:38.341731] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.022 [2024-07-14 09:44:38.341743] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.022 [2024-07-14 09:44:38.341787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.022 qpair failed and we were unable to recover it. 00:34:54.022 [2024-07-14 09:44:38.351522] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.022 [2024-07-14 09:44:38.351686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.022 [2024-07-14 09:44:38.351712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.022 [2024-07-14 09:44:38.351727] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.022 [2024-07-14 09:44:38.351739] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.022 [2024-07-14 09:44:38.351785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.022 qpair failed and we were unable to recover it. 00:34:54.022 [2024-07-14 09:44:38.361529] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.022 [2024-07-14 09:44:38.361691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.022 [2024-07-14 09:44:38.361721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.022 [2024-07-14 09:44:38.361736] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.022 [2024-07-14 09:44:38.361749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.022 [2024-07-14 09:44:38.361781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.022 qpair failed and we were unable to recover it. 
00:34:54.022 [2024-07-14 09:44:38.371579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.022 [2024-07-14 09:44:38.371746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.022 [2024-07-14 09:44:38.371771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.022 [2024-07-14 09:44:38.371786] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.022 [2024-07-14 09:44:38.371798] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.022 [2024-07-14 09:44:38.371827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.022 qpair failed and we were unable to recover it. 00:34:54.022 [2024-07-14 09:44:38.381618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.022 [2024-07-14 09:44:38.381794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.022 [2024-07-14 09:44:38.381820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.022 [2024-07-14 09:44:38.381852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.022 [2024-07-14 09:44:38.381864] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.022 [2024-07-14 09:44:38.381917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.022 qpair failed and we were unable to recover it. 00:34:54.022 [2024-07-14 09:44:38.391604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.022 [2024-07-14 09:44:38.391756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.022 [2024-07-14 09:44:38.391782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.022 [2024-07-14 09:44:38.391796] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.022 [2024-07-14 09:44:38.391809] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.022 [2024-07-14 09:44:38.391838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.022 qpair failed and we were unable to recover it. 
00:34:54.022 [2024-07-14 09:44:38.401634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.022 [2024-07-14 09:44:38.401791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.022 [2024-07-14 09:44:38.401817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.022 [2024-07-14 09:44:38.401831] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.022 [2024-07-14 09:44:38.401843] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.022 [2024-07-14 09:44:38.401888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.022 qpair failed and we were unable to recover it. 00:34:54.022 [2024-07-14 09:44:38.411671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.022 [2024-07-14 09:44:38.411839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.022 [2024-07-14 09:44:38.411875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.022 [2024-07-14 09:44:38.411893] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.022 [2024-07-14 09:44:38.411905] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.022 [2024-07-14 09:44:38.411935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.022 qpair failed and we were unable to recover it. 00:34:54.022 [2024-07-14 09:44:38.421698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.022 [2024-07-14 09:44:38.421912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.022 [2024-07-14 09:44:38.421939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.022 [2024-07-14 09:44:38.421954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.022 [2024-07-14 09:44:38.421967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.022 [2024-07-14 09:44:38.421998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.022 qpair failed and we were unable to recover it. 
00:34:54.023 [2024-07-14 09:44:38.431720] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.023 [2024-07-14 09:44:38.431888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.023 [2024-07-14 09:44:38.431913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.023 [2024-07-14 09:44:38.431927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.023 [2024-07-14 09:44:38.431938] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.023 [2024-07-14 09:44:38.431968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.023 qpair failed and we were unable to recover it. 00:34:54.023 [2024-07-14 09:44:38.441757] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.023 [2024-07-14 09:44:38.441922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.023 [2024-07-14 09:44:38.441948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.023 [2024-07-14 09:44:38.441963] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.023 [2024-07-14 09:44:38.441975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.023 [2024-07-14 09:44:38.442004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.023 qpair failed and we were unable to recover it. 00:34:54.023 [2024-07-14 09:44:38.451788] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.023 [2024-07-14 09:44:38.451961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.023 [2024-07-14 09:44:38.451992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.023 [2024-07-14 09:44:38.452008] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.023 [2024-07-14 09:44:38.452020] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.023 [2024-07-14 09:44:38.452049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.023 qpair failed and we were unable to recover it. 
00:34:54.023 [2024-07-14 09:44:38.461907] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.023 [2024-07-14 09:44:38.462085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.023 [2024-07-14 09:44:38.462111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.023 [2024-07-14 09:44:38.462125] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.023 [2024-07-14 09:44:38.462138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.023 [2024-07-14 09:44:38.462167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.023 qpair failed and we were unable to recover it. 00:34:54.023 [2024-07-14 09:44:38.471833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.023 [2024-07-14 09:44:38.472008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.023 [2024-07-14 09:44:38.472034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.023 [2024-07-14 09:44:38.472048] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.023 [2024-07-14 09:44:38.472060] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.023 [2024-07-14 09:44:38.472089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.023 qpair failed and we were unable to recover it. 00:34:54.282 [2024-07-14 09:44:38.481859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.282 [2024-07-14 09:44:38.482040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.282 [2024-07-14 09:44:38.482066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.282 [2024-07-14 09:44:38.482080] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.282 [2024-07-14 09:44:38.482092] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.282 [2024-07-14 09:44:38.482122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.282 qpair failed and we were unable to recover it. 
00:34:54.282 [2024-07-14 09:44:38.491916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.282 [2024-07-14 09:44:38.492085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.282 [2024-07-14 09:44:38.492111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.282 [2024-07-14 09:44:38.492125] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.282 [2024-07-14 09:44:38.492137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.282 [2024-07-14 09:44:38.492189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.282 qpair failed and we were unable to recover it. 00:34:54.282 [2024-07-14 09:44:38.501944] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.282 [2024-07-14 09:44:38.502162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.282 [2024-07-14 09:44:38.502187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.282 [2024-07-14 09:44:38.502202] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.282 [2024-07-14 09:44:38.502214] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.282 [2024-07-14 09:44:38.502243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.282 qpair failed and we were unable to recover it. 00:34:54.282 [2024-07-14 09:44:38.511954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.282 [2024-07-14 09:44:38.512115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.282 [2024-07-14 09:44:38.512140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.282 [2024-07-14 09:44:38.512155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.282 [2024-07-14 09:44:38.512167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.282 [2024-07-14 09:44:38.512199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.282 qpair failed and we were unable to recover it. 
00:34:54.282 [2024-07-14 09:44:38.521998] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.282 [2024-07-14 09:44:38.522156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.282 [2024-07-14 09:44:38.522181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.282 [2024-07-14 09:44:38.522199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.282 [2024-07-14 09:44:38.522212] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.282 [2024-07-14 09:44:38.522256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.282 qpair failed and we were unable to recover it. 00:34:54.282 [2024-07-14 09:44:38.532065] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.282 [2024-07-14 09:44:38.532254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.282 [2024-07-14 09:44:38.532294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.282 [2024-07-14 09:44:38.532308] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.282 [2024-07-14 09:44:38.532320] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.282 [2024-07-14 09:44:38.532364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.282 qpair failed and we were unable to recover it. 00:34:54.282 [2024-07-14 09:44:38.542063] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.282 [2024-07-14 09:44:38.542237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.282 [2024-07-14 09:44:38.542263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.282 [2024-07-14 09:44:38.542277] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.282 [2024-07-14 09:44:38.542289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.282 [2024-07-14 09:44:38.542318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.282 qpair failed and we were unable to recover it. 
00:34:54.282 [2024-07-14 09:44:38.552114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.282 [2024-07-14 09:44:38.552317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.282 [2024-07-14 09:44:38.552357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.282 [2024-07-14 09:44:38.552371] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.282 [2024-07-14 09:44:38.552383] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.282 [2024-07-14 09:44:38.552426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.282 qpair failed and we were unable to recover it. 00:34:54.283 [2024-07-14 09:44:38.562102] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.283 [2024-07-14 09:44:38.562273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.283 [2024-07-14 09:44:38.562299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.283 [2024-07-14 09:44:38.562313] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.283 [2024-07-14 09:44:38.562326] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.283 [2024-07-14 09:44:38.562354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.283 qpair failed and we were unable to recover it. 00:34:54.283 [2024-07-14 09:44:38.572176] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.283 [2024-07-14 09:44:38.572342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.283 [2024-07-14 09:44:38.572367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.283 [2024-07-14 09:44:38.572382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.283 [2024-07-14 09:44:38.572394] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.283 [2024-07-14 09:44:38.572423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.283 qpair failed and we were unable to recover it. 
00:34:54.283 [2024-07-14 09:44:38.582147] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.283 [2024-07-14 09:44:38.582331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.283 [2024-07-14 09:44:38.582357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.283 [2024-07-14 09:44:38.582371] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.283 [2024-07-14 09:44:38.582389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.283 [2024-07-14 09:44:38.582419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.283 qpair failed and we were unable to recover it. 00:34:54.283 [2024-07-14 09:44:38.592256] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.283 [2024-07-14 09:44:38.592419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.283 [2024-07-14 09:44:38.592446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.283 [2024-07-14 09:44:38.592460] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.283 [2024-07-14 09:44:38.592472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.283 [2024-07-14 09:44:38.592501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.283 qpair failed and we were unable to recover it. 00:34:54.283 [2024-07-14 09:44:38.602233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.283 [2024-07-14 09:44:38.602423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.283 [2024-07-14 09:44:38.602448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.283 [2024-07-14 09:44:38.602462] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.283 [2024-07-14 09:44:38.602474] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.283 [2024-07-14 09:44:38.602503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.283 qpair failed and we were unable to recover it. 
00:34:54.283 [2024-07-14 09:44:38.612283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.283 [2024-07-14 09:44:38.612447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.283 [2024-07-14 09:44:38.612472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.283 [2024-07-14 09:44:38.612486] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.283 [2024-07-14 09:44:38.612498] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.283 [2024-07-14 09:44:38.612552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.283 qpair failed and we were unable to recover it. 00:34:54.283 [2024-07-14 09:44:38.622322] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.283 [2024-07-14 09:44:38.622490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.283 [2024-07-14 09:44:38.622515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.283 [2024-07-14 09:44:38.622530] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.283 [2024-07-14 09:44:38.622542] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.283 [2024-07-14 09:44:38.622586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.283 qpair failed and we were unable to recover it. 00:34:54.283 [2024-07-14 09:44:38.632313] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.283 [2024-07-14 09:44:38.632470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.283 [2024-07-14 09:44:38.632496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.283 [2024-07-14 09:44:38.632510] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.283 [2024-07-14 09:44:38.632522] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.283 [2024-07-14 09:44:38.632551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.283 qpair failed and we were unable to recover it. 
00:34:54.283 [2024-07-14 09:44:38.642366] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.283 [2024-07-14 09:44:38.642524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.283 [2024-07-14 09:44:38.642549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.283 [2024-07-14 09:44:38.642563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.283 [2024-07-14 09:44:38.642575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.283 [2024-07-14 09:44:38.642619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.283 qpair failed and we were unable to recover it. 00:34:54.283 [2024-07-14 09:44:38.652392] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.283 [2024-07-14 09:44:38.652566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.283 [2024-07-14 09:44:38.652591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.283 [2024-07-14 09:44:38.652606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.283 [2024-07-14 09:44:38.652618] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.283 [2024-07-14 09:44:38.652648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.283 qpair failed and we were unable to recover it. 00:34:54.283 [2024-07-14 09:44:38.662391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.283 [2024-07-14 09:44:38.662559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.283 [2024-07-14 09:44:38.662584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.283 [2024-07-14 09:44:38.662598] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.283 [2024-07-14 09:44:38.662610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.283 [2024-07-14 09:44:38.662640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.283 qpair failed and we were unable to recover it. 
00:34:54.283 [2024-07-14 09:44:38.672443] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.283 [2024-07-14 09:44:38.672599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.283 [2024-07-14 09:44:38.672623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.283 [2024-07-14 09:44:38.672646] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.283 [2024-07-14 09:44:38.672659] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.283 [2024-07-14 09:44:38.672692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.283 qpair failed and we were unable to recover it. 00:34:54.283 [2024-07-14 09:44:38.682428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.283 [2024-07-14 09:44:38.682594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.283 [2024-07-14 09:44:38.682619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.283 [2024-07-14 09:44:38.682635] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.283 [2024-07-14 09:44:38.682647] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.283 [2024-07-14 09:44:38.682681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.283 qpair failed and we were unable to recover it. 00:34:54.283 [2024-07-14 09:44:38.692516] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.283 [2024-07-14 09:44:38.692703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.283 [2024-07-14 09:44:38.692744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.283 [2024-07-14 09:44:38.692758] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.283 [2024-07-14 09:44:38.692770] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.283 [2024-07-14 09:44:38.692813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.283 qpair failed and we were unable to recover it. 
00:34:54.283 [2024-07-14 09:44:38.702504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.283 [2024-07-14 09:44:38.702707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.284 [2024-07-14 09:44:38.702732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.284 [2024-07-14 09:44:38.702746] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.284 [2024-07-14 09:44:38.702758] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.284 [2024-07-14 09:44:38.702788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.284 qpair failed and we were unable to recover it. 00:34:54.284 [2024-07-14 09:44:38.712533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.284 [2024-07-14 09:44:38.712705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.284 [2024-07-14 09:44:38.712732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.284 [2024-07-14 09:44:38.712749] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.284 [2024-07-14 09:44:38.712779] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.284 [2024-07-14 09:44:38.712808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.284 qpair failed and we were unable to recover it. 00:34:54.284 [2024-07-14 09:44:38.722581] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.284 [2024-07-14 09:44:38.722742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.284 [2024-07-14 09:44:38.722767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.284 [2024-07-14 09:44:38.722782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.284 [2024-07-14 09:44:38.722794] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.284 [2024-07-14 09:44:38.722823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.284 qpair failed and we were unable to recover it. 
00:34:54.284 [2024-07-14 09:44:38.732685] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.284 [2024-07-14 09:44:38.732857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.284 [2024-07-14 09:44:38.732892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.284 [2024-07-14 09:44:38.732915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.284 [2024-07-14 09:44:38.732927] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.284 [2024-07-14 09:44:38.732957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.284 qpair failed and we were unable to recover it. 00:34:54.543 [2024-07-14 09:44:38.742616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.543 [2024-07-14 09:44:38.742789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.543 [2024-07-14 09:44:38.742815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.543 [2024-07-14 09:44:38.742829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.543 [2024-07-14 09:44:38.742841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.543 [2024-07-14 09:44:38.742878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.543 qpair failed and we were unable to recover it. 00:34:54.543 [2024-07-14 09:44:38.752704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.543 [2024-07-14 09:44:38.752905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.543 [2024-07-14 09:44:38.752931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.543 [2024-07-14 09:44:38.752946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.543 [2024-07-14 09:44:38.752958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.543 [2024-07-14 09:44:38.752988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.543 qpair failed and we were unable to recover it. 
00:34:54.543 [2024-07-14 09:44:38.762664] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.543 [2024-07-14 09:44:38.762826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.543 [2024-07-14 09:44:38.762851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.543 [2024-07-14 09:44:38.762882] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.543 [2024-07-14 09:44:38.762897] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.543 [2024-07-14 09:44:38.762928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.543 qpair failed and we were unable to recover it. 00:34:54.543 [2024-07-14 09:44:38.772705] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.543 [2024-07-14 09:44:38.772910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.543 [2024-07-14 09:44:38.772936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.543 [2024-07-14 09:44:38.772950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.543 [2024-07-14 09:44:38.772963] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.543 [2024-07-14 09:44:38.772992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.543 qpair failed and we were unable to recover it. 00:34:54.543 [2024-07-14 09:44:38.782721] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.543 [2024-07-14 09:44:38.782895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.543 [2024-07-14 09:44:38.782921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.543 [2024-07-14 09:44:38.782936] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.543 [2024-07-14 09:44:38.782948] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.543 [2024-07-14 09:44:38.782977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.543 qpair failed and we were unable to recover it. 
00:34:54.543 [2024-07-14 09:44:38.792739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.543 [2024-07-14 09:44:38.792902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.543 [2024-07-14 09:44:38.792927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.543 [2024-07-14 09:44:38.792941] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.543 [2024-07-14 09:44:38.792954] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.543 [2024-07-14 09:44:38.792983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.543 qpair failed and we were unable to recover it. 00:34:54.543 [2024-07-14 09:44:38.802816] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.543 [2024-07-14 09:44:38.802975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.543 [2024-07-14 09:44:38.803000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.543 [2024-07-14 09:44:38.803015] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.543 [2024-07-14 09:44:38.803027] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.543 [2024-07-14 09:44:38.803057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.543 qpair failed and we were unable to recover it. 00:34:54.543 [2024-07-14 09:44:38.812822] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.543 [2024-07-14 09:44:38.812995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.543 [2024-07-14 09:44:38.813021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.543 [2024-07-14 09:44:38.813035] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.544 [2024-07-14 09:44:38.813048] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.544 [2024-07-14 09:44:38.813077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.544 qpair failed and we were unable to recover it. 
00:34:54.544 [2024-07-14 09:44:38.822858] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.544 [2024-07-14 09:44:38.823080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.544 [2024-07-14 09:44:38.823105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.544 [2024-07-14 09:44:38.823130] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.544 [2024-07-14 09:44:38.823143] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.544 [2024-07-14 09:44:38.823171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.544 qpair failed and we were unable to recover it. 00:34:54.544 [2024-07-14 09:44:38.832940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.544 [2024-07-14 09:44:38.833104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.544 [2024-07-14 09:44:38.833130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.544 [2024-07-14 09:44:38.833144] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.544 [2024-07-14 09:44:38.833157] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.544 [2024-07-14 09:44:38.833186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.544 qpair failed and we were unable to recover it. 00:34:54.544 [2024-07-14 09:44:38.842931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.544 [2024-07-14 09:44:38.843102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.544 [2024-07-14 09:44:38.843127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.544 [2024-07-14 09:44:38.843141] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.544 [2024-07-14 09:44:38.843153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.544 [2024-07-14 09:44:38.843198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.544 qpair failed and we were unable to recover it. 
00:34:54.544 [2024-07-14 09:44:38.852922] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.544 [2024-07-14 09:44:38.853089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.544 [2024-07-14 09:44:38.853119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.544 [2024-07-14 09:44:38.853134] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.544 [2024-07-14 09:44:38.853146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.544 [2024-07-14 09:44:38.853176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.544 qpair failed and we were unable to recover it. 00:34:54.544 [2024-07-14 09:44:38.862971] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.544 [2024-07-14 09:44:38.863143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.544 [2024-07-14 09:44:38.863170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.544 [2024-07-14 09:44:38.863185] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.544 [2024-07-14 09:44:38.863211] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.544 [2024-07-14 09:44:38.863240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.544 qpair failed and we were unable to recover it. 00:34:54.544 [2024-07-14 09:44:38.873044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.544 [2024-07-14 09:44:38.873213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.544 [2024-07-14 09:44:38.873239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.544 [2024-07-14 09:44:38.873253] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.544 [2024-07-14 09:44:38.873281] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.544 [2024-07-14 09:44:38.873311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.544 qpair failed and we were unable to recover it. 
00:34:54.544 [2024-07-14 09:44:38.883070] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.544 [2024-07-14 09:44:38.883254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.544 [2024-07-14 09:44:38.883294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.544 [2024-07-14 09:44:38.883309] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.544 [2024-07-14 09:44:38.883321] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.544 [2024-07-14 09:44:38.883363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.544 qpair failed and we were unable to recover it. 00:34:54.544 [2024-07-14 09:44:38.893083] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.544 [2024-07-14 09:44:38.893251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.544 [2024-07-14 09:44:38.893277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.544 [2024-07-14 09:44:38.893306] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.544 [2024-07-14 09:44:38.893319] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.544 [2024-07-14 09:44:38.893354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.544 qpair failed and we were unable to recover it. 00:34:54.544 [2024-07-14 09:44:38.903080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.544 [2024-07-14 09:44:38.903243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.544 [2024-07-14 09:44:38.903269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.544 [2024-07-14 09:44:38.903283] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.544 [2024-07-14 09:44:38.903295] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.544 [2024-07-14 09:44:38.903324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.544 qpair failed and we were unable to recover it. 
00:34:54.544 [2024-07-14 09:44:38.913075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.544 [2024-07-14 09:44:38.913231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.544 [2024-07-14 09:44:38.913256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.544 [2024-07-14 09:44:38.913271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.544 [2024-07-14 09:44:38.913283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.544 [2024-07-14 09:44:38.913312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.544 qpair failed and we were unable to recover it. 00:34:54.544 [2024-07-14 09:44:38.923114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.544 [2024-07-14 09:44:38.923286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.544 [2024-07-14 09:44:38.923311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.544 [2024-07-14 09:44:38.923326] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.544 [2024-07-14 09:44:38.923338] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.544 [2024-07-14 09:44:38.923366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.544 qpair failed and we were unable to recover it. 00:34:54.544 [2024-07-14 09:44:38.933176] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.544 [2024-07-14 09:44:38.933352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.544 [2024-07-14 09:44:38.933377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.544 [2024-07-14 09:44:38.933391] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.544 [2024-07-14 09:44:38.933403] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.544 [2024-07-14 09:44:38.933433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.544 qpair failed and we were unable to recover it. 
00:34:54.544 [2024-07-14 09:44:38.943235] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.544 [2024-07-14 09:44:38.943401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.544 [2024-07-14 09:44:38.943432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.544 [2024-07-14 09:44:38.943447] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.544 [2024-07-14 09:44:38.943459] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.544 [2024-07-14 09:44:38.943488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.544 qpair failed and we were unable to recover it. 00:34:54.544 [2024-07-14 09:44:38.953210] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.544 [2024-07-14 09:44:38.953380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.544 [2024-07-14 09:44:38.953406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.544 [2024-07-14 09:44:38.953421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.544 [2024-07-14 09:44:38.953433] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.544 [2024-07-14 09:44:38.953474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.544 qpair failed and we were unable to recover it. 00:34:54.545 [2024-07-14 09:44:38.963235] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.545 [2024-07-14 09:44:38.963397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.545 [2024-07-14 09:44:38.963423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.545 [2024-07-14 09:44:38.963437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.545 [2024-07-14 09:44:38.963449] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.545 [2024-07-14 09:44:38.963495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.545 qpair failed and we were unable to recover it. 
00:34:54.545 [2024-07-14 09:44:38.973294] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.545 [2024-07-14 09:44:38.973462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.545 [2024-07-14 09:44:38.973487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.545 [2024-07-14 09:44:38.973501] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.545 [2024-07-14 09:44:38.973514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.545 [2024-07-14 09:44:38.973543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.545 qpair failed and we were unable to recover it. 00:34:54.545 [2024-07-14 09:44:38.983305] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.545 [2024-07-14 09:44:38.983474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.545 [2024-07-14 09:44:38.983499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.545 [2024-07-14 09:44:38.983513] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.545 [2024-07-14 09:44:38.983533] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.545 [2024-07-14 09:44:38.983575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.545 qpair failed and we were unable to recover it. 00:34:54.545 [2024-07-14 09:44:38.993382] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.545 [2024-07-14 09:44:38.993543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.545 [2024-07-14 09:44:38.993569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.545 [2024-07-14 09:44:38.993583] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.545 [2024-07-14 09:44:38.993610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.545 [2024-07-14 09:44:38.993639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.545 qpair failed and we were unable to recover it. 
00:34:54.804 [2024-07-14 09:44:39.003355] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.804 [2024-07-14 09:44:39.003520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.804 [2024-07-14 09:44:39.003545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.804 [2024-07-14 09:44:39.003560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.804 [2024-07-14 09:44:39.003572] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.804 [2024-07-14 09:44:39.003600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.804 qpair failed and we were unable to recover it. 00:34:54.804 [2024-07-14 09:44:39.013386] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.804 [2024-07-14 09:44:39.013549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.804 [2024-07-14 09:44:39.013575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.804 [2024-07-14 09:44:39.013589] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.804 [2024-07-14 09:44:39.013601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.804 [2024-07-14 09:44:39.013631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.804 qpair failed and we were unable to recover it. 00:34:54.804 [2024-07-14 09:44:39.023442] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.804 [2024-07-14 09:44:39.023607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.804 [2024-07-14 09:44:39.023631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.804 [2024-07-14 09:44:39.023646] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.804 [2024-07-14 09:44:39.023658] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.804 [2024-07-14 09:44:39.023703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.804 qpair failed and we were unable to recover it. 
00:34:54.804 [2024-07-14 09:44:39.033463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.804 [2024-07-14 09:44:39.033648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.804 [2024-07-14 09:44:39.033673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.804 [2024-07-14 09:44:39.033688] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.804 [2024-07-14 09:44:39.033701] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.804 [2024-07-14 09:44:39.033745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.804 qpair failed and we were unable to recover it. 00:34:54.804 [2024-07-14 09:44:39.043488] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.804 [2024-07-14 09:44:39.043648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.804 [2024-07-14 09:44:39.043674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.804 [2024-07-14 09:44:39.043688] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.804 [2024-07-14 09:44:39.043700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.804 [2024-07-14 09:44:39.043745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.804 qpair failed and we were unable to recover it. 00:34:54.804 [2024-07-14 09:44:39.053521] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.804 [2024-07-14 09:44:39.053690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.804 [2024-07-14 09:44:39.053716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.804 [2024-07-14 09:44:39.053733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.804 [2024-07-14 09:44:39.053761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.804 [2024-07-14 09:44:39.053791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.804 qpair failed and we were unable to recover it. 
00:34:54.804 [2024-07-14 09:44:39.063585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.804 [2024-07-14 09:44:39.063835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.804 [2024-07-14 09:44:39.063860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.804 [2024-07-14 09:44:39.063895] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.804 [2024-07-14 09:44:39.063909] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.804 [2024-07-14 09:44:39.063939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.804 qpair failed and we were unable to recover it. 00:34:54.804 [2024-07-14 09:44:39.073553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.804 [2024-07-14 09:44:39.073708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.804 [2024-07-14 09:44:39.073733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.804 [2024-07-14 09:44:39.073752] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.804 [2024-07-14 09:44:39.073766] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.804 [2024-07-14 09:44:39.073811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.804 qpair failed and we were unable to recover it. 00:34:54.804 [2024-07-14 09:44:39.083591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.804 [2024-07-14 09:44:39.083745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.804 [2024-07-14 09:44:39.083770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.805 [2024-07-14 09:44:39.083784] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.805 [2024-07-14 09:44:39.083796] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.805 [2024-07-14 09:44:39.083840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.805 qpair failed and we were unable to recover it. 
00:34:54.805 [2024-07-14 09:44:39.093636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.805 [2024-07-14 09:44:39.093826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.805 [2024-07-14 09:44:39.093851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.805 [2024-07-14 09:44:39.093875] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.805 [2024-07-14 09:44:39.093893] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.805 [2024-07-14 09:44:39.093924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.805 qpair failed and we were unable to recover it. 00:34:54.805 [2024-07-14 09:44:39.103704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.805 [2024-07-14 09:44:39.103904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.805 [2024-07-14 09:44:39.103929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.805 [2024-07-14 09:44:39.103944] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.805 [2024-07-14 09:44:39.103956] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.805 [2024-07-14 09:44:39.103985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.805 qpair failed and we were unable to recover it. 00:34:54.805 [2024-07-14 09:44:39.113682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.805 [2024-07-14 09:44:39.113832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.805 [2024-07-14 09:44:39.113857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.805 [2024-07-14 09:44:39.113881] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.805 [2024-07-14 09:44:39.113895] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.805 [2024-07-14 09:44:39.113925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.805 qpair failed and we were unable to recover it. 
00:34:54.805 [2024-07-14 09:44:39.123703] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.805 [2024-07-14 09:44:39.123860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.805 [2024-07-14 09:44:39.123895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.805 [2024-07-14 09:44:39.123910] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.805 [2024-07-14 09:44:39.123922] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.805 [2024-07-14 09:44:39.123950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.805 qpair failed and we were unable to recover it. 00:34:54.805 [2024-07-14 09:44:39.133743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.805 [2024-07-14 09:44:39.133916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.805 [2024-07-14 09:44:39.133942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.805 [2024-07-14 09:44:39.133956] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.805 [2024-07-14 09:44:39.133968] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.805 [2024-07-14 09:44:39.133997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.805 qpair failed and we were unable to recover it. 00:34:54.805 [2024-07-14 09:44:39.143763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.805 [2024-07-14 09:44:39.143953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.805 [2024-07-14 09:44:39.143980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.805 [2024-07-14 09:44:39.143994] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.805 [2024-07-14 09:44:39.144006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.805 [2024-07-14 09:44:39.144036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.805 qpair failed and we were unable to recover it. 
00:34:54.805 [2024-07-14 09:44:39.153792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.805 [2024-07-14 09:44:39.154039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.805 [2024-07-14 09:44:39.154064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.805 [2024-07-14 09:44:39.154079] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.805 [2024-07-14 09:44:39.154090] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.805 [2024-07-14 09:44:39.154119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.805 qpair failed and we were unable to recover it. 00:34:54.805 [2024-07-14 09:44:39.163820] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.805 [2024-07-14 09:44:39.163990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.805 [2024-07-14 09:44:39.164016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.805 [2024-07-14 09:44:39.164035] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.805 [2024-07-14 09:44:39.164050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.805 [2024-07-14 09:44:39.164080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.805 qpair failed and we were unable to recover it. 00:34:54.805 [2024-07-14 09:44:39.173883] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.805 [2024-07-14 09:44:39.174064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.805 [2024-07-14 09:44:39.174090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.805 [2024-07-14 09:44:39.174104] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.805 [2024-07-14 09:44:39.174116] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.805 [2024-07-14 09:44:39.174160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.805 qpair failed and we were unable to recover it. 
00:34:54.805 [2024-07-14 09:44:39.183888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.805 [2024-07-14 09:44:39.184058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.805 [2024-07-14 09:44:39.184084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.805 [2024-07-14 09:44:39.184098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.805 [2024-07-14 09:44:39.184110] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.805 [2024-07-14 09:44:39.184140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.805 qpair failed and we were unable to recover it. 00:34:54.805 [2024-07-14 09:44:39.193896] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.805 [2024-07-14 09:44:39.194058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.805 [2024-07-14 09:44:39.194084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.805 [2024-07-14 09:44:39.194098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.805 [2024-07-14 09:44:39.194110] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.805 [2024-07-14 09:44:39.194140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.805 qpair failed and we were unable to recover it. 00:34:54.805 [2024-07-14 09:44:39.203923] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.805 [2024-07-14 09:44:39.204130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.805 [2024-07-14 09:44:39.204155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.805 [2024-07-14 09:44:39.204169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.805 [2024-07-14 09:44:39.204181] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.805 [2024-07-14 09:44:39.204210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.805 qpair failed and we were unable to recover it. 
00:34:54.805 [2024-07-14 09:44:39.213979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.805 [2024-07-14 09:44:39.214185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.805 [2024-07-14 09:44:39.214210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.805 [2024-07-14 09:44:39.214224] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.805 [2024-07-14 09:44:39.214236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.805 [2024-07-14 09:44:39.214269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.805 qpair failed and we were unable to recover it. 00:34:54.805 [2024-07-14 09:44:39.223988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.805 [2024-07-14 09:44:39.224153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.805 [2024-07-14 09:44:39.224179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.805 [2024-07-14 09:44:39.224193] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.805 [2024-07-14 09:44:39.224205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.805 [2024-07-14 09:44:39.224248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.806 qpair failed and we were unable to recover it. 00:34:54.806 [2024-07-14 09:44:39.234000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.806 [2024-07-14 09:44:39.234153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.806 [2024-07-14 09:44:39.234178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.806 [2024-07-14 09:44:39.234192] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.806 [2024-07-14 09:44:39.234204] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.806 [2024-07-14 09:44:39.234233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.806 qpair failed and we were unable to recover it. 
00:34:54.806 [2024-07-14 09:44:39.244051] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.806 [2024-07-14 09:44:39.244211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.806 [2024-07-14 09:44:39.244237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.806 [2024-07-14 09:44:39.244254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.806 [2024-07-14 09:44:39.244269] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.806 [2024-07-14 09:44:39.244298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.806 qpair failed and we were unable to recover it. 00:34:54.806 [2024-07-14 09:44:39.254075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:54.806 [2024-07-14 09:44:39.254245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:54.806 [2024-07-14 09:44:39.254275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:54.806 [2024-07-14 09:44:39.254291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:54.806 [2024-07-14 09:44:39.254303] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:54.806 [2024-07-14 09:44:39.254335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:54.806 qpair failed and we were unable to recover it. 00:34:55.065 [2024-07-14 09:44:39.264116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.065 [2024-07-14 09:44:39.264322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.065 [2024-07-14 09:44:39.264347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.065 [2024-07-14 09:44:39.264361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.065 [2024-07-14 09:44:39.264373] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:55.065 [2024-07-14 09:44:39.264403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:55.065 qpair failed and we were unable to recover it. 
00:34:55.065 [2024-07-14 09:44:39.274131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.065 [2024-07-14 09:44:39.274290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.065 [2024-07-14 09:44:39.274316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.065 [2024-07-14 09:44:39.274330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.065 [2024-07-14 09:44:39.274343] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:55.065 [2024-07-14 09:44:39.274372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:55.065 qpair failed and we were unable to recover it. 00:34:55.065 [2024-07-14 09:44:39.284186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.065 [2024-07-14 09:44:39.284343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.065 [2024-07-14 09:44:39.284368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.065 [2024-07-14 09:44:39.284383] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.065 [2024-07-14 09:44:39.284395] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:55.065 [2024-07-14 09:44:39.284427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:55.065 qpair failed and we were unable to recover it. 00:34:55.065 [2024-07-14 09:44:39.294255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.065 [2024-07-14 09:44:39.294454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.065 [2024-07-14 09:44:39.294479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.065 [2024-07-14 09:44:39.294494] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.065 [2024-07-14 09:44:39.294506] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:55.065 [2024-07-14 09:44:39.294541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:55.065 qpair failed and we were unable to recover it. 
00:34:55.065 [2024-07-14 09:44:39.304213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.065 [2024-07-14 09:44:39.304436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.065 [2024-07-14 09:44:39.304462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.065 [2024-07-14 09:44:39.304476] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.065 [2024-07-14 09:44:39.304488] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:55.065 [2024-07-14 09:44:39.304517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:55.065 qpair failed and we were unable to recover it. 00:34:55.065 [2024-07-14 09:44:39.314230] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.065 [2024-07-14 09:44:39.314390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.065 [2024-07-14 09:44:39.314415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.065 [2024-07-14 09:44:39.314430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.065 [2024-07-14 09:44:39.314442] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:55.065 [2024-07-14 09:44:39.314486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:55.065 qpair failed and we were unable to recover it. 00:34:55.065 [2024-07-14 09:44:39.324293] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.065 [2024-07-14 09:44:39.324476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.065 [2024-07-14 09:44:39.324502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.065 [2024-07-14 09:44:39.324516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.065 [2024-07-14 09:44:39.324528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:55.065 [2024-07-14 09:44:39.324557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:55.065 qpair failed and we were unable to recover it. 
00:34:55.065 [2024-07-14 09:44:39.334349] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.065 [2024-07-14 09:44:39.334513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.065 [2024-07-14 09:44:39.334538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.065 [2024-07-14 09:44:39.334553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.065 [2024-07-14 09:44:39.334565] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:55.065 [2024-07-14 09:44:39.334621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:55.065 qpair failed and we were unable to recover it. 00:34:55.065 [2024-07-14 09:44:39.344315] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.065 [2024-07-14 09:44:39.344483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.065 [2024-07-14 09:44:39.344513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.065 [2024-07-14 09:44:39.344528] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.065 [2024-07-14 09:44:39.344540] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:55.065 [2024-07-14 09:44:39.344570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:55.065 qpair failed and we were unable to recover it. 00:34:55.065 [2024-07-14 09:44:39.354363] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.065 [2024-07-14 09:44:39.354569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.065 [2024-07-14 09:44:39.354610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.065 [2024-07-14 09:44:39.354624] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.065 [2024-07-14 09:44:39.354636] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:55.065 [2024-07-14 09:44:39.354665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:55.065 qpair failed and we were unable to recover it. 
00:34:55.065 [2024-07-14 09:44:39.364373] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.065 [2024-07-14 09:44:39.364534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.065 [2024-07-14 09:44:39.364560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.065 [2024-07-14 09:44:39.364574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.066 [2024-07-14 09:44:39.364586] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:55.066 [2024-07-14 09:44:39.364615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:55.066 qpair failed and we were unable to recover it. 00:34:55.066 [2024-07-14 09:44:39.374437] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.066 [2024-07-14 09:44:39.374598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.066 [2024-07-14 09:44:39.374623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.066 [2024-07-14 09:44:39.374637] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.066 [2024-07-14 09:44:39.374649] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:55.066 [2024-07-14 09:44:39.374679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:55.066 qpair failed and we were unable to recover it. 00:34:55.066 [2024-07-14 09:44:39.384451] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.066 [2024-07-14 09:44:39.384628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.066 [2024-07-14 09:44:39.384653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.066 [2024-07-14 09:44:39.384668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.066 [2024-07-14 09:44:39.384703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:55.066 [2024-07-14 09:44:39.384733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:55.066 qpair failed and we were unable to recover it. 
00:34:55.066 [2024-07-14 09:44:39.394504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.066 [2024-07-14 09:44:39.394690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.066 [2024-07-14 09:44:39.394716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.066 [2024-07-14 09:44:39.394730] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.066 [2024-07-14 09:44:39.394743] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:55.066 [2024-07-14 09:44:39.394773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:55.066 qpair failed and we were unable to recover it. 00:34:55.066 [2024-07-14 09:44:39.404477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.066 [2024-07-14 09:44:39.404635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.066 [2024-07-14 09:44:39.404661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.066 [2024-07-14 09:44:39.404675] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.066 [2024-07-14 09:44:39.404687] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:55.066 [2024-07-14 09:44:39.404716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:55.066 qpair failed and we were unable to recover it. 00:34:55.066 [2024-07-14 09:44:39.414539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.066 [2024-07-14 09:44:39.414712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.066 [2024-07-14 09:44:39.414738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.066 [2024-07-14 09:44:39.414753] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.066 [2024-07-14 09:44:39.414765] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:55.066 [2024-07-14 09:44:39.414809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:55.066 qpair failed and we were unable to recover it. 
00:34:55.066 [2024-07-14 09:44:39.424580] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.066 [2024-07-14 09:44:39.424755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.066 [2024-07-14 09:44:39.424795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.066 [2024-07-14 09:44:39.424808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.066 [2024-07-14 09:44:39.424820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:55.066 [2024-07-14 09:44:39.424864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:55.066 qpair failed and we were unable to recover it. 00:34:55.066 [2024-07-14 09:44:39.434571] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.066 [2024-07-14 09:44:39.434821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.066 [2024-07-14 09:44:39.434845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.066 [2024-07-14 09:44:39.434880] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.066 [2024-07-14 09:44:39.434896] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:55.066 [2024-07-14 09:44:39.434926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:55.066 qpair failed and we were unable to recover it. 00:34:55.066 [2024-07-14 09:44:39.444599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.066 [2024-07-14 09:44:39.444784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.066 [2024-07-14 09:44:39.444811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.066 [2024-07-14 09:44:39.444840] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.066 [2024-07-14 09:44:39.444852] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:55.066 [2024-07-14 09:44:39.444906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:55.066 qpair failed and we were unable to recover it. 
00:34:55.066 [2024-07-14 09:44:39.454648] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.066 [2024-07-14 09:44:39.454845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.066 [2024-07-14 09:44:39.454881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.066 [2024-07-14 09:44:39.454898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.066 [2024-07-14 09:44:39.454910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:55.066 [2024-07-14 09:44:39.454939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:55.066 qpair failed and we were unable to recover it. 00:34:55.066 [2024-07-14 09:44:39.464659] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.066 [2024-07-14 09:44:39.464839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.066 [2024-07-14 09:44:39.464864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.066 [2024-07-14 09:44:39.464888] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.066 [2024-07-14 09:44:39.464900] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:55.066 [2024-07-14 09:44:39.464930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:55.066 qpair failed and we were unable to recover it. 00:34:55.066 [2024-07-14 09:44:39.474706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.066 [2024-07-14 09:44:39.474906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.066 [2024-07-14 09:44:39.474932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.066 [2024-07-14 09:44:39.474947] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.066 [2024-07-14 09:44:39.474968] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:55.066 [2024-07-14 09:44:39.474999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:55.066 qpair failed and we were unable to recover it. 
00:34:55.066 [2024-07-14 09:44:39.484719] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.066 [2024-07-14 09:44:39.484894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.066 [2024-07-14 09:44:39.484920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.066 [2024-07-14 09:44:39.484935] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.066 [2024-07-14 09:44:39.484948] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:55.066 [2024-07-14 09:44:39.484977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:55.066 qpair failed and we were unable to recover it. 00:34:55.066 [2024-07-14 09:44:39.494757] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.066 [2024-07-14 09:44:39.494927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.066 [2024-07-14 09:44:39.494953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.066 [2024-07-14 09:44:39.494968] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.066 [2024-07-14 09:44:39.494980] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:55.066 [2024-07-14 09:44:39.495009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:55.066 qpair failed and we were unable to recover it. 00:34:55.066 [2024-07-14 09:44:39.504791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.066 [2024-07-14 09:44:39.505006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.066 [2024-07-14 09:44:39.505032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.066 [2024-07-14 09:44:39.505046] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.066 [2024-07-14 09:44:39.505058] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:55.067 [2024-07-14 09:44:39.505087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:55.067 qpair failed and we were unable to recover it. 
00:34:55.067 [2024-07-14 09:44:39.514811] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.067 [2024-07-14 09:44:39.514976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.067 [2024-07-14 09:44:39.515002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.067 [2024-07-14 09:44:39.515017] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.067 [2024-07-14 09:44:39.515029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:55.067 [2024-07-14 09:44:39.515058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:55.067 qpair failed and we were unable to recover it. 00:34:55.325 [2024-07-14 09:44:39.524838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.325 [2024-07-14 09:44:39.525005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.325 [2024-07-14 09:44:39.525030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.325 [2024-07-14 09:44:39.525045] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.325 [2024-07-14 09:44:39.525057] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:55.325 [2024-07-14 09:44:39.525087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:55.325 qpair failed and we were unable to recover it. 00:34:55.325 [2024-07-14 09:44:39.534855] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.325 [2024-07-14 09:44:39.535036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.325 [2024-07-14 09:44:39.535061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.325 [2024-07-14 09:44:39.535075] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.325 [2024-07-14 09:44:39.535088] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:55.325 [2024-07-14 09:44:39.535117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:55.325 qpair failed and we were unable to recover it. 
00:34:55.325 [2024-07-14 09:44:39.544933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.325 [2024-07-14 09:44:39.545130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.325 [2024-07-14 09:44:39.545156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.325 [2024-07-14 09:44:39.545171] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.325 [2024-07-14 09:44:39.545202] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:55.325 [2024-07-14 09:44:39.545231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:55.325 qpair failed and we were unable to recover it. 00:34:55.325 [2024-07-14 09:44:39.554926] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.325 [2024-07-14 09:44:39.555097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.325 [2024-07-14 09:44:39.555123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.325 [2024-07-14 09:44:39.555138] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.325 [2024-07-14 09:44:39.555153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:55.325 [2024-07-14 09:44:39.555199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:55.325 qpair failed and we were unable to recover it. 00:34:55.325 [2024-07-14 09:44:39.564971] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.325 [2024-07-14 09:44:39.565130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.325 [2024-07-14 09:44:39.565156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.325 [2024-07-14 09:44:39.565175] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.325 [2024-07-14 09:44:39.565189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:55.325 [2024-07-14 09:44:39.565219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:55.325 qpair failed and we were unable to recover it. 
00:34:55.325 [2024-07-14 09:44:39.574985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.325 [2024-07-14 09:44:39.575147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.325 [2024-07-14 09:44:39.575173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.325 [2024-07-14 09:44:39.575187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.325 [2024-07-14 09:44:39.575199] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:55.325 [2024-07-14 09:44:39.575240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:55.325 qpair failed and we were unable to recover it. 00:34:55.325 [2024-07-14 09:44:39.585011] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.325 [2024-07-14 09:44:39.585224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.325 [2024-07-14 09:44:39.585249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.325 [2024-07-14 09:44:39.585263] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.325 [2024-07-14 09:44:39.585276] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1668000b90 00:34:55.325 [2024-07-14 09:44:39.585304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:55.325 qpair failed and we were unable to recover it. 00:34:55.325 [2024-07-14 09:44:39.595066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.325 [2024-07-14 09:44:39.595226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.325 [2024-07-14 09:44:39.595258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.325 [2024-07-14 09:44:39.595273] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.325 [2024-07-14 09:44:39.595302] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xff6600 00:34:55.325 [2024-07-14 09:44:39.595331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:55.325 qpair failed and we were unable to recover it. 
00:34:55.325 [2024-07-14 09:44:39.605110] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.325 [2024-07-14 09:44:39.605296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.325 [2024-07-14 09:44:39.605322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.325 [2024-07-14 09:44:39.605352] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.325 [2024-07-14 09:44:39.605364] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xff6600 00:34:55.325 [2024-07-14 09:44:39.605409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:55.325 qpair failed and we were unable to recover it. 00:34:55.325 [2024-07-14 09:44:39.615100] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.325 [2024-07-14 09:44:39.615313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.325 [2024-07-14 09:44:39.615345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.325 [2024-07-14 09:44:39.615361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.325 [2024-07-14 09:44:39.615374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1660000b90 00:34:55.325 [2024-07-14 09:44:39.615407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:55.325 qpair failed and we were unable to recover it. 00:34:55.325 [2024-07-14 09:44:39.625138] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.325 [2024-07-14 09:44:39.625332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.325 [2024-07-14 09:44:39.625359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.325 [2024-07-14 09:44:39.625374] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.325 [2024-07-14 09:44:39.625386] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1660000b90 00:34:55.325 [2024-07-14 09:44:39.625416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:55.325 qpair failed and we were unable to recover it. 
00:34:55.325 [2024-07-14 09:44:39.635161] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.325 [2024-07-14 09:44:39.635326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.325 [2024-07-14 09:44:39.635359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.325 [2024-07-14 09:44:39.635375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.325 [2024-07-14 09:44:39.635388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f166c000b90 00:34:55.325 [2024-07-14 09:44:39.635432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:55.325 qpair failed and we were unable to recover it. 00:34:55.325 [2024-07-14 09:44:39.645208] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:55.325 [2024-07-14 09:44:39.645366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:55.325 [2024-07-14 09:44:39.645394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:55.325 [2024-07-14 09:44:39.645409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:55.325 [2024-07-14 09:44:39.645421] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f166c000b90 00:34:55.325 [2024-07-14 09:44:39.645451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:55.325 qpair failed and we were unable to recover it. 00:34:55.325 [2024-07-14 09:44:39.645545] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:34:55.325 A controller has encountered a failure and is being reset. 00:34:55.325 Controller properly reset. 00:34:55.325 Initializing NVMe Controllers 00:34:55.325 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:55.325 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:55.325 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:34:55.325 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:34:55.325 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:34:55.325 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:34:55.325 Initialization complete. Launching workers. 
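The CONNECT attempts above all target the listener that the nvme_fabric.c entries print (trtype TCP, traddr 10.0.0.2, trsvcid 4420, subnqn nqn.2016-06.io.spdk:cnode1). As a minimal sketch for cross-checking the same endpoint from a kernel initiator — assuming nvme-cli is installed on the initiator host and the target listener is still up, neither of which this log confirms — the equivalent fabrics CONNECT can be attempted by hand:

  # illustrative only; transport, address, service id and NQN copied from the log entries above
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  # remove the controller again once the check is done
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1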
00:34:55.325 Starting thread on core 1 00:34:55.325 Starting thread on core 2 00:34:55.325 Starting thread on core 3 00:34:55.325 Starting thread on core 0 00:34:55.325 09:44:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:34:55.325 00:34:55.325 real 0m10.782s 00:34:55.325 user 0m16.771s 00:34:55.325 sys 0m5.838s 00:34:55.325 09:44:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:55.325 09:44:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:55.325 ************************************ 00:34:55.325 END TEST nvmf_target_disconnect_tc2 00:34:55.325 ************************************ 00:34:55.325 09:44:39 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:34:55.325 09:44:39 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:34:55.325 09:44:39 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:34:55.325 09:44:39 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:34:55.325 09:44:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:55.325 09:44:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:34:55.325 09:44:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:55.325 09:44:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:34:55.325 09:44:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:55.325 09:44:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:55.325 rmmod nvme_tcp 00:34:55.325 rmmod nvme_fabrics 00:34:55.583 rmmod nvme_keyring 00:34:55.583 09:44:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:55.583 09:44:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:34:55.583 09:44:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:34:55.583 09:44:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 898717 ']' 00:34:55.583 09:44:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 898717 00:34:55.583 09:44:39 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 898717 ']' 00:34:55.583 09:44:39 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 898717 00:34:55.583 09:44:39 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:34:55.583 09:44:39 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:55.583 09:44:39 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 898717 00:34:55.583 09:44:39 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:34:55.583 09:44:39 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:34:55.583 09:44:39 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 898717' 00:34:55.583 killing process with pid 898717 00:34:55.583 09:44:39 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 898717 00:34:55.583 09:44:39 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 898717 00:34:55.841 09:44:40 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:55.841 09:44:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:55.841 09:44:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:55.841 09:44:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:55.841 09:44:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:55.841 09:44:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:55.841 09:44:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:55.841 09:44:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:57.742 09:44:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:57.742 00:34:57.742 real 0m15.609s 00:34:57.742 user 0m42.929s 00:34:57.742 sys 0m7.857s 00:34:57.742 09:44:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:57.742 09:44:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:57.742 ************************************ 00:34:57.742 END TEST nvmf_target_disconnect 00:34:57.742 ************************************ 00:34:57.742 09:44:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:34:57.742 09:44:42 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:34:57.742 09:44:42 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:57.742 09:44:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:57.742 09:44:42 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:34:57.742 00:34:57.742 real 27m10.061s 00:34:57.742 user 73m45.083s 00:34:57.742 sys 6m27.346s 00:34:57.742 09:44:42 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:57.742 09:44:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:57.742 ************************************ 00:34:57.742 END TEST nvmf_tcp 00:34:57.742 ************************************ 00:34:57.742 09:44:42 -- common/autotest_common.sh@1142 -- # return 0 00:34:57.742 09:44:42 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:34:57.742 09:44:42 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:57.742 09:44:42 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:57.742 09:44:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:57.742 09:44:42 -- common/autotest_common.sh@10 -- # set +x 00:34:57.742 ************************************ 00:34:57.742 START TEST spdkcli_nvmf_tcp 00:34:57.742 ************************************ 00:34:57.742 09:44:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:58.000 * Looking for test storage... 
00:34:58.000 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:34:58.000 09:44:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:34:58.000 09:44:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:34:58.000 09:44:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:34:58.000 09:44:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:58.000 09:44:42 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:34:58.000 09:44:42 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:58.000 09:44:42 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:58.000 09:44:42 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:58.000 09:44:42 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:58.000 09:44:42 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:58.000 09:44:42 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:58.000 09:44:42 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:58.000 09:44:42 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:58.000 09:44:42 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:58.000 09:44:42 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:58.000 09:44:42 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:58.000 09:44:42 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:58.000 09:44:42 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:58.000 09:44:42 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:58.000 09:44:42 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:58.000 09:44:42 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:58.000 09:44:42 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:58.000 09:44:42 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:58.000 09:44:42 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:58.000 09:44:42 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:58.000 09:44:42 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:58.000 09:44:42 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:58.000 09:44:42 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:58.000 09:44:42 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:34:58.000 09:44:42 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:58.000 09:44:42 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:34:58.000 09:44:42 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:58.000 09:44:42 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:58.000 09:44:42 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:58.000 09:44:42 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:58.000 09:44:42 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:58.000 09:44:42 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:58.000 09:44:42 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:58.000 09:44:42 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:58.000 09:44:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:34:58.000 09:44:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:34:58.000 09:44:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:34:58.000 09:44:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:34:58.000 09:44:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:58.000 09:44:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:58.000 09:44:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:34:58.000 09:44:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=899910 00:34:58.000 09:44:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:34:58.000 09:44:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 899910 00:34:58.000 09:44:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 899910 ']' 00:34:58.000 09:44:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:58.000 09:44:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:58.000 09:44:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:58.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:58.000 09:44:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:58.000 09:44:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:58.000 [2024-07-14 09:44:42.301766] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:34:58.000 [2024-07-14 09:44:42.301851] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid899910 ] 00:34:58.000 EAL: No free 2048 kB hugepages reported on node 1 00:34:58.000 [2024-07-14 09:44:42.358316] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:58.000 [2024-07-14 09:44:42.444127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:58.000 [2024-07-14 09:44:42.444130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:58.258 09:44:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:58.258 09:44:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:34:58.258 09:44:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:34:58.258 09:44:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:58.258 09:44:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:58.258 09:44:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:34:58.258 09:44:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:34:58.258 09:44:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:34:58.258 09:44:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:58.258 09:44:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:58.258 09:44:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:34:58.258 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:34:58.258 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:34:58.258 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:34:58.258 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:34:58.258 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:34:58.258 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:34:58.258 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:58.258 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:34:58.258 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:34:58.258 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:58.258 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:58.258 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:34:58.258 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:58.258 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:58.258 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:34:58.258 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:58.258 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:58.258 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:58.258 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:58.258 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:34:58.258 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:34:58.258 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:58.258 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:34:58.258 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:58.258 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:34:58.258 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:34:58.258 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:34:58.258 ' 00:35:00.783 [2024-07-14 09:44:45.118966] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:02.157 [2024-07-14 09:44:46.355214] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:35:04.686 [2024-07-14 09:44:48.614104] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:35:06.606 [2024-07-14 09:44:50.576250] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:35:07.977 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:35:07.977 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:35:07.977 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:35:07.977 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:35:07.977 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:35:07.977 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:35:07.977 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:35:07.977 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:07.978 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:35:07.978 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:35:07.978 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:07.978 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:07.978 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:35:07.978 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:07.978 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:07.978 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:35:07.978 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:07.978 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:07.978 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:07.978 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:07.978 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:35:07.978 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:35:07.978 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:07.978 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:35:07.978 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:07.978 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:35:07.978 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:35:07.978 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:35:07.978 09:44:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:35:07.978 09:44:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:07.978 09:44:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:07.978 09:44:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:35:07.978 09:44:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:07.978 09:44:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:07.978 09:44:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:35:07.978 09:44:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:35:08.235 09:44:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:35:08.235 09:44:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:35:08.235 09:44:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:35:08.235 09:44:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:08.235 09:44:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:08.235 09:44:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:35:08.235 09:44:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:08.235 09:44:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:08.235 09:44:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:35:08.235 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:35:08.235 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:08.236 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:35:08.236 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:35:08.236 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:35:08.236 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:35:08.236 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:08.236 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:35:08.236 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:35:08.236 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:35:08.236 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:35:08.236 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:35:08.236 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:35:08.236 ' 00:35:13.495 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:35:13.495 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:35:13.495 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:13.495 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:35:13.495 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:35:13.495 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:35:13.495 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:35:13.495 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:13.495 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:35:13.495 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:35:13.495 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:35:13.495 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:35:13.495 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:35:13.495 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:35:13.495 09:44:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:35:13.495 09:44:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:13.495 09:44:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:13.495 09:44:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 899910 00:35:13.495 09:44:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 899910 ']' 00:35:13.495 09:44:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 899910 00:35:13.495 09:44:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:35:13.495 09:44:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:13.495 09:44:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 899910 00:35:13.495 09:44:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:13.495 09:44:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:13.495 09:44:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 899910' 00:35:13.495 killing process with pid 899910 00:35:13.495 09:44:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 899910 00:35:13.495 09:44:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 899910 00:35:13.754 09:44:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:35:13.754 09:44:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:35:13.754 09:44:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 899910 ']' 00:35:13.754 09:44:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 899910 00:35:13.754 09:44:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 899910 ']' 00:35:13.754 09:44:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 899910 00:35:13.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (899910) - No such process 00:35:13.754 09:44:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 899910 is not found' 00:35:13.754 Process with pid 899910 is not found 00:35:13.754 09:44:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:35:13.754 09:44:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:35:13.754 09:44:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:35:13.754 00:35:13.754 real 0m15.952s 00:35:13.754 user 0m33.772s 00:35:13.754 sys 0m0.774s 00:35:13.754 09:44:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:13.754 09:44:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:13.754 ************************************ 00:35:13.754 END TEST spdkcli_nvmf_tcp 00:35:13.754 ************************************ 00:35:13.754 09:44:58 -- common/autotest_common.sh@1142 -- # return 0 00:35:13.754 09:44:58 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:13.754 09:44:58 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:35:13.754 09:44:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:13.754 09:44:58 -- common/autotest_common.sh@10 -- # set +x 00:35:13.754 ************************************ 00:35:13.754 START TEST nvmf_identify_passthru 00:35:13.754 ************************************ 00:35:13.754 09:44:58 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:14.013 * Looking for test storage... 00:35:14.013 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:14.013 09:44:58 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:14.013 09:44:58 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:35:14.014 09:44:58 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:14.014 09:44:58 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:14.014 09:44:58 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:14.014 09:44:58 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:14.014 09:44:58 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:14.014 09:44:58 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:14.014 09:44:58 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:14.014 09:44:58 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:14.014 09:44:58 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:14.014 09:44:58 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:14.014 09:44:58 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:14.014 09:44:58 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:14.014 09:44:58 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:14.014 09:44:58 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:14.014 09:44:58 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:14.014 09:44:58 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:14.014 09:44:58 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:14.014 09:44:58 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:14.014 09:44:58 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:14.014 09:44:58 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:14.014 09:44:58 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:14.014 09:44:58 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:14.014 09:44:58 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:14.014 09:44:58 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:14.014 09:44:58 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:14.014 09:44:58 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:35:14.014 09:44:58 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:14.014 09:44:58 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:14.014 09:44:58 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:14.014 09:44:58 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:14.014 09:44:58 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:14.014 09:44:58 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:14.014 09:44:58 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:14.014 09:44:58 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:14.014 09:44:58 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:14.014 09:44:58 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:14.014 09:44:58 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:14.014 09:44:58 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:14.014 09:44:58 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:14.014 09:44:58 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:14.014 09:44:58 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:14.014 09:44:58 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:14.014 09:44:58 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:14.014 09:44:58 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:35:14.014 09:44:58 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:14.014 09:44:58 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:14.014 09:44:58 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:14.014 09:44:58 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:14.014 09:44:58 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:14.014 09:44:58 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:14.014 09:44:58 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:14.014 09:44:58 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:14.014 09:44:58 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:14.014 09:44:58 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:14.014 09:44:58 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:35:14.014 09:44:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:15.922 09:45:00 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:15.922 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:15.922 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:15.922 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:15.922 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:15.922 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:35:15.923 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:15.923 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
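A minimal sketch of the sysfs lookup the trace above performs when mapping a PCI function to its kernel net device, assuming an E810 port at 0000:0a:00.0 as reported in this run (the address and the cvl_0_0 name are host-specific):

pci=0000:0a:00.0
# mirrors pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) from nvmf/common.sh above
for netdir in /sys/bus/pci/devices/$pci/net/*; do
  echo "${netdir##*/}"   # prints the interface name, e.g. cvl_0_0
done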
00:35:15.923 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:15.923 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:15.923 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:15.923 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:15.923 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:15.923 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:15.923 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:15.923 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:15.923 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:15.923 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:15.923 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:15.923 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:15.923 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:15.923 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:15.923 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:15.923 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:15.923 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:15.923 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:15.923 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:15.923 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:15.923 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:15.923 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:15.923 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.329 ms 00:35:15.923 00:35:15.923 --- 10.0.0.2 ping statistics --- 00:35:15.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:15.923 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:35:15.923 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:15.923 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:15.923 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:35:15.923 00:35:15.923 --- 10.0.0.1 ping statistics --- 00:35:15.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:15.923 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:35:15.923 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:15.923 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:35:15.923 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:15.923 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:15.923 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:15.923 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:15.923 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:15.923 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:15.923 09:45:00 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:15.923 09:45:00 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:35:15.923 09:45:00 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:15.923 09:45:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:15.923 09:45:00 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:35:15.923 09:45:00 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:35:15.923 09:45:00 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:35:15.923 09:45:00 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:35:15.923 09:45:00 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:35:15.923 09:45:00 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:35:15.923 09:45:00 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:35:15.923 09:45:00 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:35:15.923 09:45:00 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:35:15.923 09:45:00 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:35:15.923 09:45:00 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:35:15.923 09:45:00 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:35:15.923 09:45:00 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:88:00.0 00:35:15.923 09:45:00 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:35:15.923 09:45:00 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:35:15.923 09:45:00 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:35:15.923 09:45:00 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:35:15.923 09:45:00 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:35:16.235 EAL: No free 2048 kB hugepages reported on node 1 00:35:20.419 
09:45:04 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:35:20.419 09:45:04 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:35:20.419 09:45:04 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:35:20.419 09:45:04 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:35:20.419 EAL: No free 2048 kB hugepages reported on node 1 00:35:24.600 09:45:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:35:24.600 09:45:08 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:35:24.600 09:45:08 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:24.600 09:45:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:24.600 09:45:08 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:35:24.600 09:45:08 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:24.600 09:45:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:24.600 09:45:08 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=904968 00:35:24.600 09:45:08 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:35:24.600 09:45:08 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:24.600 09:45:08 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 904968 00:35:24.600 09:45:08 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 904968 ']' 00:35:24.600 09:45:08 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:24.600 09:45:08 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:24.600 09:45:08 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:24.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:24.600 09:45:08 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:24.600 09:45:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:24.600 [2024-07-14 09:45:08.789601] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:35:24.600 [2024-07-14 09:45:08.789680] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:24.600 EAL: No free 2048 kB hugepages reported on node 1 00:35:24.600 [2024-07-14 09:45:08.861398] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:24.600 [2024-07-14 09:45:08.960414] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:24.600 [2024-07-14 09:45:08.960484] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
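The namespace plumbing executed above reduces to a short sequence; a minimal sketch, assuming the two E810 ports are named cvl_0_0 (target side) and cvl_0_1 (initiator side) as in this log, with the 10.0.0.0/24 addressing the test uses:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                                   # target gets its own network namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator stays in the default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port towards the initiator
ping -c 1 10.0.0.2                                             # initiator -> target reachability check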
00:35:24.600 [2024-07-14 09:45:08.960499] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:24.600 [2024-07-14 09:45:08.960511] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:24.600 [2024-07-14 09:45:08.960522] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:24.600 [2024-07-14 09:45:08.963886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:24.600 [2024-07-14 09:45:08.963927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:24.600 [2024-07-14 09:45:08.963986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:35:24.600 [2024-07-14 09:45:08.963990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:24.600 09:45:09 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:24.600 09:45:09 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:35:24.600 09:45:09 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:35:24.600 09:45:09 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.600 09:45:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:24.600 INFO: Log level set to 20 00:35:24.600 INFO: Requests: 00:35:24.600 { 00:35:24.600 "jsonrpc": "2.0", 00:35:24.600 "method": "nvmf_set_config", 00:35:24.600 "id": 1, 00:35:24.600 "params": { 00:35:24.600 "admin_cmd_passthru": { 00:35:24.600 "identify_ctrlr": true 00:35:24.600 } 00:35:24.600 } 00:35:24.600 } 00:35:24.600 00:35:24.600 INFO: response: 00:35:24.600 { 00:35:24.600 "jsonrpc": "2.0", 00:35:24.600 "id": 1, 00:35:24.600 "result": true 00:35:24.600 } 00:35:24.600 00:35:24.600 09:45:09 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.600 09:45:09 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:35:24.600 09:45:09 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.600 09:45:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:24.600 INFO: Setting log level to 20 00:35:24.600 INFO: Setting log level to 20 00:35:24.600 INFO: Log level set to 20 00:35:24.600 INFO: Log level set to 20 00:35:24.600 INFO: Requests: 00:35:24.600 { 00:35:24.600 "jsonrpc": "2.0", 00:35:24.600 "method": "framework_start_init", 00:35:24.600 "id": 1 00:35:24.600 } 00:35:24.600 00:35:24.600 INFO: Requests: 00:35:24.600 { 00:35:24.600 "jsonrpc": "2.0", 00:35:24.600 "method": "framework_start_init", 00:35:24.600 "id": 1 00:35:24.600 } 00:35:24.600 00:35:24.858 [2024-07-14 09:45:09.131108] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:35:24.858 INFO: response: 00:35:24.858 { 00:35:24.858 "jsonrpc": "2.0", 00:35:24.858 "id": 1, 00:35:24.858 "result": true 00:35:24.858 } 00:35:24.858 00:35:24.858 INFO: response: 00:35:24.858 { 00:35:24.858 "jsonrpc": "2.0", 00:35:24.858 "id": 1, 00:35:24.858 "result": true 00:35:24.858 } 00:35:24.858 00:35:24.858 09:45:09 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.858 09:45:09 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:24.858 09:45:09 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.858 09:45:09 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:35:24.858 INFO: Setting log level to 40 00:35:24.858 INFO: Setting log level to 40 00:35:24.858 INFO: Setting log level to 40 00:35:24.858 [2024-07-14 09:45:09.141223] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:24.858 09:45:09 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.858 09:45:09 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:35:24.858 09:45:09 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:24.858 09:45:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:24.859 09:45:09 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:35:24.859 09:45:09 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.859 09:45:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:28.135 Nvme0n1 00:35:28.135 09:45:11 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.136 09:45:11 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:35:28.136 09:45:12 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.136 09:45:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:28.136 09:45:12 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.136 09:45:12 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:35:28.136 09:45:12 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.136 09:45:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:28.136 09:45:12 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.136 09:45:12 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:28.136 09:45:12 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.136 09:45:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:28.136 [2024-07-14 09:45:12.024326] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:28.136 09:45:12 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.136 09:45:12 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:35:28.136 09:45:12 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.136 09:45:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:28.136 [ 00:35:28.136 { 00:35:28.136 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:35:28.136 "subtype": "Discovery", 00:35:28.136 "listen_addresses": [], 00:35:28.136 "allow_any_host": true, 00:35:28.136 "hosts": [] 00:35:28.136 }, 00:35:28.136 { 00:35:28.136 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:28.136 "subtype": "NVMe", 00:35:28.136 "listen_addresses": [ 00:35:28.136 { 00:35:28.136 "trtype": "TCP", 00:35:28.136 "adrfam": "IPv4", 00:35:28.136 "traddr": "10.0.0.2", 00:35:28.136 "trsvcid": "4420" 00:35:28.136 } 00:35:28.136 ], 00:35:28.136 "allow_any_host": true, 00:35:28.136 "hosts": [], 00:35:28.136 "serial_number": 
"SPDK00000000000001", 00:35:28.136 "model_number": "SPDK bdev Controller", 00:35:28.136 "max_namespaces": 1, 00:35:28.136 "min_cntlid": 1, 00:35:28.136 "max_cntlid": 65519, 00:35:28.136 "namespaces": [ 00:35:28.136 { 00:35:28.136 "nsid": 1, 00:35:28.136 "bdev_name": "Nvme0n1", 00:35:28.136 "name": "Nvme0n1", 00:35:28.136 "nguid": "18DF26F1E31F4242B5862FCAB97B3A30", 00:35:28.136 "uuid": "18df26f1-e31f-4242-b586-2fcab97b3a30" 00:35:28.136 } 00:35:28.136 ] 00:35:28.136 } 00:35:28.136 ] 00:35:28.136 09:45:12 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.136 09:45:12 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:28.136 09:45:12 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:35:28.136 09:45:12 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:35:28.136 EAL: No free 2048 kB hugepages reported on node 1 00:35:28.136 09:45:12 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:35:28.136 09:45:12 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:28.136 09:45:12 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:35:28.136 09:45:12 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:35:28.136 EAL: No free 2048 kB hugepages reported on node 1 00:35:28.136 09:45:12 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:35:28.136 09:45:12 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:35:28.136 09:45:12 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:35:28.136 09:45:12 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:28.136 09:45:12 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.136 09:45:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:28.136 09:45:12 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.136 09:45:12 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:35:28.136 09:45:12 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:35:28.136 09:45:12 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:28.136 09:45:12 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:35:28.136 09:45:12 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:28.136 09:45:12 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:35:28.136 09:45:12 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:28.136 09:45:12 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:28.136 rmmod nvme_tcp 00:35:28.136 rmmod nvme_fabrics 00:35:28.136 rmmod nvme_keyring 00:35:28.136 09:45:12 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:28.136 09:45:12 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:35:28.136 09:45:12 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:35:28.136 09:45:12 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 904968 ']' 00:35:28.136 09:45:12 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 904968 00:35:28.136 09:45:12 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 904968 ']' 00:35:28.136 09:45:12 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 904968 00:35:28.136 09:45:12 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:35:28.136 09:45:12 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:28.136 09:45:12 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 904968 00:35:28.136 09:45:12 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:28.136 09:45:12 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:28.136 09:45:12 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 904968' 00:35:28.136 killing process with pid 904968 00:35:28.136 09:45:12 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 904968 00:35:28.136 09:45:12 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 904968 00:35:30.050 09:45:14 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:30.050 09:45:14 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:30.050 09:45:14 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:30.050 09:45:14 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:30.050 09:45:14 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:30.050 09:45:14 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:30.050 09:45:14 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:30.050 09:45:14 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:31.953 09:45:16 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:31.953 00:35:31.953 real 0m17.877s 00:35:31.953 user 0m26.601s 00:35:31.953 sys 0m2.273s 00:35:31.953 09:45:16 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:31.953 09:45:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:31.953 ************************************ 00:35:31.953 END TEST nvmf_identify_passthru 00:35:31.953 ************************************ 00:35:31.953 09:45:16 -- common/autotest_common.sh@1142 -- # return 0 00:35:31.953 09:45:16 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:31.953 09:45:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:31.953 09:45:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:31.953 09:45:16 -- common/autotest_common.sh@10 -- # set +x 00:35:31.953 ************************************ 00:35:31.953 START TEST nvmf_dif 00:35:31.953 ************************************ 00:35:31.953 09:45:16 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:31.953 * Looking for test storage... 
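The identify-passthru flow that just completed can be reproduced by hand against a target started with --wait-for-rpc; a rough sketch using scripts/rpc.py (which is what rpc_cmd wraps in this trace), with the NQN, PCIe address, listener address and ports copied from the log and therefore host-specific:

./scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr        # forward Identify admin commands to the backing controller
./scripts/rpc.py framework_start_init
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# the serial/model reported over the fabric should now match the local PCIe device
./build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' | grep 'Serial Number:'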
00:35:31.953 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:31.953 09:45:16 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:31.953 09:45:16 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:35:31.953 09:45:16 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:31.953 09:45:16 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:31.953 09:45:16 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:31.953 09:45:16 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:31.953 09:45:16 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:31.953 09:45:16 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:31.953 09:45:16 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:31.953 09:45:16 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:31.953 09:45:16 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:31.953 09:45:16 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:31.953 09:45:16 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:31.953 09:45:16 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:31.953 09:45:16 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:31.953 09:45:16 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:31.953 09:45:16 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:31.953 09:45:16 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:31.953 09:45:16 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:31.953 09:45:16 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:31.953 09:45:16 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:31.953 09:45:16 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:31.953 09:45:16 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.953 09:45:16 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.953 09:45:16 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.953 09:45:16 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:35:31.953 09:45:16 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.953 09:45:16 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:35:31.953 09:45:16 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:31.953 09:45:16 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:31.953 09:45:16 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:31.953 09:45:16 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:31.953 09:45:16 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:31.953 09:45:16 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:31.953 09:45:16 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:31.953 09:45:16 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:31.953 09:45:16 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:35:31.953 09:45:16 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:35:31.953 09:45:16 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:35:31.953 09:45:16 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:35:31.953 09:45:16 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:35:31.953 09:45:16 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:31.953 09:45:16 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:31.953 09:45:16 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:31.953 09:45:16 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:31.953 09:45:16 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:31.953 09:45:16 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:31.953 09:45:16 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:31.953 09:45:16 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:31.953 09:45:16 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:31.953 09:45:16 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:31.953 09:45:16 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:35:31.953 09:45:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:33.857 09:45:18 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:33.857 09:45:18 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:35:33.857 09:45:18 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:33.857 09:45:18 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:33.857 09:45:18 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:33.857 09:45:18 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:33.857 09:45:18 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:33.857 09:45:18 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:35:33.857 09:45:18 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:33.857 09:45:18 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:35:33.857 09:45:18 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:35:33.857 09:45:18 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:35:33.857 09:45:18 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:35:33.857 09:45:18 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:35:33.857 09:45:18 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:35:33.857 09:45:18 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:33.858 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:33.858 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
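The e810/x722/mlx arrays being populated above are keyed by plain PCI vendor:device IDs; as a hedged aside, one way to confirm which E810 functions a host exposes is a direct lspci query with the 8086:159b ID seen in this run:

lspci -d 8086:159b    # lists the E810 ports, 0000:0a:00.0 and 0000:0a:00.1 on this machine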
00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:33.858 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:33.858 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:33.858 09:45:18 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:33.858 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:33.858 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.150 ms 00:35:33.858 00:35:33.858 --- 10.0.0.2 ping statistics --- 00:35:33.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:33.858 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:33.858 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:33.858 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:35:33.858 00:35:33.858 --- 10.0.0.1 ping statistics --- 00:35:33.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:33.858 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:35:33.858 09:45:18 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:34.794 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:34.794 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:35:34.794 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:34.794 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:34.794 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:34.794 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:34.794 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:34.794 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:34.794 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:34.794 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:34.794 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:35.054 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:35.054 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:35.054 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:35.054 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:35.054 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:35.054 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:35.054 09:45:19 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:35.054 09:45:19 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:35.054 09:45:19 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:35.054 09:45:19 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:35.054 09:45:19 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:35.054 09:45:19 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:35.054 09:45:19 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:35:35.054 09:45:19 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:35:35.054 09:45:19 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:35.054 09:45:19 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:35.054 09:45:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:35.054 09:45:19 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=908160 00:35:35.054 09:45:19 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:35:35.054 09:45:19 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 908160 00:35:35.054 09:45:19 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 908160 ']' 00:35:35.054 09:45:19 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:35.054 09:45:19 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:35.054 09:45:19 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:35.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:35.054 09:45:19 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:35.054 09:45:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:35.313 [2024-07-14 09:45:19.519915] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:35:35.313 [2024-07-14 09:45:19.520003] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:35.313 EAL: No free 2048 kB hugepages reported on node 1 00:35:35.313 [2024-07-14 09:45:19.589499] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:35.313 [2024-07-14 09:45:19.678768] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:35.313 [2024-07-14 09:45:19.678834] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:35.313 [2024-07-14 09:45:19.678859] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:35.313 [2024-07-14 09:45:19.678884] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:35.313 [2024-07-14 09:45:19.678896] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
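For readers following the nvmf_tcp_init phase traced above: the harness builds a back-to-back NVMe/TCP loopback out of the two discovered ice ports by moving one of them (cvl_0_0) into a private network namespace and leaving its peer (cvl_0_1) in the root namespace, so traffic between initiator and target really crosses the link instead of short-circuiting through the local stack. A condensed sketch of the same commands follows; interface names and addresses are exactly those discovered on this host, and this is a recap of the trace above rather than a general-purpose recipe.

# move the target-side port into its own namespace and address both ends
ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic back in
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # sanity-check both directions
# the target application itself then runs inside the namespace:
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF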
00:35:35.313 [2024-07-14 09:45:19.678926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:35.572 09:45:19 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:35.572 09:45:19 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:35:35.572 09:45:19 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:35.572 09:45:19 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:35.572 09:45:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:35.572 09:45:19 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:35.572 09:45:19 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:35:35.572 09:45:19 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:35:35.572 09:45:19 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.572 09:45:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:35.572 [2024-07-14 09:45:19.832646] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:35.572 09:45:19 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.572 09:45:19 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:35:35.572 09:45:19 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:35.572 09:45:19 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:35.572 09:45:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:35.572 ************************************ 00:35:35.572 START TEST fio_dif_1_default 00:35:35.572 ************************************ 00:35:35.572 09:45:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:35:35.572 09:45:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:35:35.572 09:45:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:35:35.572 09:45:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:35:35.572 09:45:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:35:35.572 09:45:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:35:35.572 09:45:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:35.572 09:45:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.572 09:45:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:35.572 bdev_null0 00:35:35.572 09:45:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.572 09:45:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:35.572 09:45:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.572 09:45:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:35.572 09:45:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.572 09:45:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:35.572 09:45:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.572 09:45:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:35.572 09:45:19 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.572 09:45:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:35.572 09:45:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.572 09:45:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:35.572 [2024-07-14 09:45:19.888952] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:35.572 09:45:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.572 09:45:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:35:35.573 09:45:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:35:35.573 09:45:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:35.573 09:45:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:35:35.573 09:45:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:35:35.573 09:45:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:35.573 09:45:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:35.573 09:45:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:35.573 { 00:35:35.573 "params": { 00:35:35.573 "name": "Nvme$subsystem", 00:35:35.573 "trtype": "$TEST_TRANSPORT", 00:35:35.573 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:35.573 "adrfam": "ipv4", 00:35:35.573 "trsvcid": "$NVMF_PORT", 00:35:35.573 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:35.573 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:35.573 "hdgst": ${hdgst:-false}, 00:35:35.573 "ddgst": ${ddgst:-false} 00:35:35.573 }, 00:35:35.573 "method": "bdev_nvme_attach_controller" 00:35:35.573 } 00:35:35.573 EOF 00:35:35.573 )") 00:35:35.573 09:45:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:35.573 09:45:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:35:35.573 09:45:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:35.573 09:45:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:35:35.573 09:45:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:35.573 09:45:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:35:35.573 09:45:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:35.573 09:45:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:35.573 09:45:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:35:35.573 09:45:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:35.573 09:45:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:35.573 09:45:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:35:35.573 09:45:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:35.573 09:45:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:35:35.573 09:45:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:35:35.573 09:45:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:35:35.573 09:45:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:35.573 09:45:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:35:35.573 09:45:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:35:35.573 09:45:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:35.573 "params": { 00:35:35.573 "name": "Nvme0", 00:35:35.573 "trtype": "tcp", 00:35:35.573 "traddr": "10.0.0.2", 00:35:35.573 "adrfam": "ipv4", 00:35:35.573 "trsvcid": "4420", 00:35:35.573 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:35.573 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:35.573 "hdgst": false, 00:35:35.573 "ddgst": false 00:35:35.573 }, 00:35:35.573 "method": "bdev_nvme_attach_controller" 00:35:35.573 }' 00:35:35.573 09:45:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:35.573 09:45:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:35.573 09:45:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:35.573 09:45:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:35.573 09:45:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:35.573 09:45:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:35.573 09:45:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:35.573 09:45:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:35.573 09:45:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:35.573 09:45:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:35.831 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:35.831 fio-3.35 00:35:35.831 Starting 1 thread 00:35:35.831 EAL: No free 2048 kB hugepages reported on node 1 00:35:48.062 00:35:48.062 filename0: (groupid=0, jobs=1): err= 0: pid=908390: Sun Jul 14 09:45:30 2024 00:35:48.062 read: IOPS=95, BW=381KiB/s (390kB/s)(3808KiB/10003msec) 00:35:48.062 slat (nsec): min=4834, max=54473, avg=10931.28, stdev=5006.67 00:35:48.062 clat (usec): min=40953, max=43834, avg=41995.63, stdev=260.20 00:35:48.062 lat (usec): min=40961, max=43849, avg=42006.57, stdev=260.18 00:35:48.062 clat percentiles (usec): 00:35:48.062 | 1.00th=[41157], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:35:48.062 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:35:48.062 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:48.062 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43779], 99.95th=[43779], 00:35:48.062 | 99.99th=[43779] 00:35:48.062 bw ( KiB/s): min= 352, max= 384, per=99.82%, avg=380.63, stdev=10.09, samples=19 00:35:48.062 iops : min= 88, max= 96, 
avg=95.16, stdev= 2.52, samples=19 00:35:48.062 lat (msec) : 50=100.00% 00:35:48.062 cpu : usr=88.32%, sys=10.67%, ctx=16, majf=0, minf=242 00:35:48.062 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:48.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.062 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.062 issued rwts: total=952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.062 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:48.062 00:35:48.062 Run status group 0 (all jobs): 00:35:48.062 READ: bw=381KiB/s (390kB/s), 381KiB/s-381KiB/s (390kB/s-390kB/s), io=3808KiB (3899kB), run=10003-10003msec 00:35:48.062 09:45:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:35:48.062 09:45:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:35:48.062 09:45:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:35:48.062 09:45:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:48.062 09:45:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:35:48.062 09:45:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:48.062 09:45:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.062 09:45:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:48.062 09:45:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.062 09:45:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:48.062 09:45:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.062 09:45:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:48.062 09:45:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.062 00:35:48.062 real 0m11.099s 00:35:48.062 user 0m9.999s 00:35:48.062 sys 0m1.333s 00:35:48.062 09:45:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:48.062 09:45:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:48.062 ************************************ 00:35:48.062 END TEST fio_dif_1_default 00:35:48.062 ************************************ 00:35:48.062 09:45:30 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:35:48.062 09:45:30 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:35:48.062 09:45:30 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:48.062 09:45:30 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:48.062 09:45:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:48.062 ************************************ 00:35:48.062 START TEST fio_dif_1_multi_subsystems 00:35:48.062 ************************************ 00:35:48.062 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:35:48.062 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:35:48.062 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:35:48.062 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:35:48.062 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 
00:35:48.062 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:35:48.062 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:35:48.062 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:48.062 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.062 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:48.062 bdev_null0 00:35:48.062 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.062 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:48.062 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.062 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:48.062 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.062 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:48.062 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.062 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:48.062 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.062 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:48.062 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.062 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:48.062 [2024-07-14 09:45:31.044377] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:48.062 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.062 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:48.062 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:35:48.062 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:35:48.062 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:48.062 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.062 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:48.062 bdev_null1 00:35:48.062 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.062 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:48.062 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.062 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:48.062 09:45:31 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.062 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:48.062 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.062 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:48.062 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.062 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:48.062 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.062 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:48.062 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.062 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:35:48.062 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:35:48.062 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:48.062 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:35:48.062 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:35:48.062 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:48.062 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:48.062 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:48.062 { 00:35:48.062 "params": { 00:35:48.062 "name": "Nvme$subsystem", 00:35:48.062 "trtype": "$TEST_TRANSPORT", 00:35:48.062 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:48.062 "adrfam": "ipv4", 00:35:48.062 "trsvcid": "$NVMF_PORT", 00:35:48.062 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:48.062 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:48.062 "hdgst": ${hdgst:-false}, 00:35:48.062 "ddgst": ${ddgst:-false} 00:35:48.062 }, 00:35:48.062 "method": "bdev_nvme_attach_controller" 00:35:48.062 } 00:35:48.062 EOF 00:35:48.062 )") 00:35:48.062 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:48.062 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:35:48.062 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:48.062 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:35:48.063 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:48.063 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:35:48.063 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:48.063 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:48.063 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:35:48.063 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:48.063 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:48.063 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:35:48.063 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:35:48.063 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:48.063 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:48.063 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:35:48.063 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:35:48.063 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:48.063 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:48.063 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:48.063 { 00:35:48.063 "params": { 00:35:48.063 "name": "Nvme$subsystem", 00:35:48.063 "trtype": "$TEST_TRANSPORT", 00:35:48.063 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:48.063 "adrfam": "ipv4", 00:35:48.063 "trsvcid": "$NVMF_PORT", 00:35:48.063 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:48.063 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:48.063 "hdgst": ${hdgst:-false}, 00:35:48.063 "ddgst": ${ddgst:-false} 00:35:48.063 }, 00:35:48.063 "method": "bdev_nvme_attach_controller" 00:35:48.063 } 00:35:48.063 EOF 00:35:48.063 )") 00:35:48.063 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:35:48.063 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:48.063 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:35:48.063 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:35:48.063 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:35:48.063 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:48.063 "params": { 00:35:48.063 "name": "Nvme0", 00:35:48.063 "trtype": "tcp", 00:35:48.063 "traddr": "10.0.0.2", 00:35:48.063 "adrfam": "ipv4", 00:35:48.063 "trsvcid": "4420", 00:35:48.063 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:48.063 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:48.063 "hdgst": false, 00:35:48.063 "ddgst": false 00:35:48.063 }, 00:35:48.063 "method": "bdev_nvme_attach_controller" 00:35:48.063 },{ 00:35:48.063 "params": { 00:35:48.063 "name": "Nvme1", 00:35:48.063 "trtype": "tcp", 00:35:48.063 "traddr": "10.0.0.2", 00:35:48.063 "adrfam": "ipv4", 00:35:48.063 "trsvcid": "4420", 00:35:48.063 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:48.063 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:48.063 "hdgst": false, 00:35:48.063 "ddgst": false 00:35:48.063 }, 00:35:48.063 "method": "bdev_nvme_attach_controller" 00:35:48.063 }' 00:35:48.063 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:48.063 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:48.063 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:48.063 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:48.063 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:48.063 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:48.063 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:48.063 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:48.063 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:48.063 09:45:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:48.063 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:48.063 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:48.063 fio-3.35 00:35:48.063 Starting 2 threads 00:35:48.063 EAL: No free 2048 kB hugepages reported on node 1 00:35:58.023 00:35:58.023 filename0: (groupid=0, jobs=1): err= 0: pid=909787: Sun Jul 14 09:45:42 2024 00:35:58.023 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10034msec) 00:35:58.023 slat (nsec): min=4869, max=30913, avg=9929.09, stdev=3084.34 00:35:58.023 clat (usec): min=40982, max=45699, avg=41951.28, stdev=317.59 00:35:58.023 lat (usec): min=40990, max=45712, avg=41961.21, stdev=317.59 00:35:58.023 clat percentiles (usec): 00:35:58.023 | 1.00th=[41157], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:35:58.023 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:35:58.023 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:58.023 | 99.00th=[42730], 99.50th=[43254], 99.90th=[45876], 99.95th=[45876], 00:35:58.023 | 99.99th=[45876] 
00:35:58.023 bw ( KiB/s): min= 352, max= 384, per=49.75%, avg=380.80, stdev= 9.85, samples=20 00:35:58.023 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:35:58.023 lat (msec) : 50=100.00% 00:35:58.023 cpu : usr=94.16%, sys=5.57%, ctx=17, majf=0, minf=101 00:35:58.023 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:58.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.023 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.023 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:58.023 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:58.023 filename1: (groupid=0, jobs=1): err= 0: pid=909788: Sun Jul 14 09:45:42 2024 00:35:58.023 read: IOPS=95, BW=383KiB/s (392kB/s)(3840KiB/10034msec) 00:35:58.023 slat (nsec): min=4450, max=33927, avg=9649.40, stdev=2576.54 00:35:58.023 clat (usec): min=40938, max=44718, avg=41777.31, stdev=443.87 00:35:58.023 lat (usec): min=40947, max=44733, avg=41786.96, stdev=443.89 00:35:58.023 clat percentiles (usec): 00:35:58.023 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:35:58.023 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:35:58.023 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:58.023 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827], 00:35:58.023 | 99.99th=[44827] 00:35:58.023 bw ( KiB/s): min= 352, max= 384, per=50.01%, avg=382.40, stdev= 7.16, samples=20 00:35:58.023 iops : min= 88, max= 96, avg=95.60, stdev= 1.79, samples=20 00:35:58.023 lat (msec) : 50=100.00% 00:35:58.023 cpu : usr=94.55%, sys=5.16%, ctx=28, majf=0, minf=143 00:35:58.023 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:58.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.023 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.023 issued rwts: total=960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:58.023 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:58.023 00:35:58.024 Run status group 0 (all jobs): 00:35:58.024 READ: bw=764KiB/s (782kB/s), 381KiB/s-383KiB/s (390kB/s-392kB/s), io=7664KiB (7848kB), run=10034-10034msec 00:35:58.024 09:45:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:35:58.024 09:45:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:35:58.024 09:45:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:58.024 09:45:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:58.024 09:45:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:35:58.024 09:45:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:58.024 09:45:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.024 09:45:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:58.024 09:45:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.024 09:45:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:58.024 09:45:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.024 09:45:42 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@10 -- # set +x 00:35:58.024 09:45:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.024 09:45:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:58.024 09:45:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:58.024 09:45:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:35:58.024 09:45:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:58.024 09:45:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.024 09:45:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:58.024 09:45:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.024 09:45:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:58.024 09:45:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.024 09:45:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:58.024 09:45:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.024 00:35:58.024 real 0m11.298s 00:35:58.024 user 0m20.086s 00:35:58.024 sys 0m1.352s 00:35:58.024 09:45:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:58.024 09:45:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:58.024 ************************************ 00:35:58.024 END TEST fio_dif_1_multi_subsystems 00:35:58.024 ************************************ 00:35:58.024 09:45:42 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:35:58.024 09:45:42 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:35:58.024 09:45:42 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:58.024 09:45:42 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:58.024 09:45:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:58.024 ************************************ 00:35:58.024 START TEST fio_dif_rand_params 00:35:58.024 ************************************ 00:35:58.024 09:45:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:35:58.024 09:45:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:35:58.024 09:45:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:35:58.024 09:45:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:35:58.024 09:45:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:35:58.024 09:45:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:35:58.024 09:45:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:35:58.024 09:45:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:35:58.024 09:45:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:35:58.024 09:45:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:58.024 09:45:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:58.024 09:45:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:58.024 09:45:42 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:58.024 09:45:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:58.024 09:45:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.024 09:45:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:58.024 bdev_null0 00:35:58.024 09:45:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.024 09:45:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:58.024 09:45:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.024 09:45:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:58.024 09:45:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.024 09:45:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:58.024 09:45:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.024 09:45:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:58.024 09:45:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.024 09:45:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:58.024 09:45:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.024 09:45:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:58.024 [2024-07-14 09:45:42.384331] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:58.024 09:45:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.024 09:45:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:35:58.024 09:45:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:35:58.024 09:45:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:58.024 09:45:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:58.024 09:45:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:58.024 09:45:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:58.024 09:45:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:58.024 { 00:35:58.024 "params": { 00:35:58.024 "name": "Nvme$subsystem", 00:35:58.024 "trtype": "$TEST_TRANSPORT", 00:35:58.024 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:58.024 "adrfam": "ipv4", 00:35:58.024 "trsvcid": "$NVMF_PORT", 00:35:58.024 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:58.024 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:58.024 "hdgst": ${hdgst:-false}, 00:35:58.024 "ddgst": ${ddgst:-false} 00:35:58.024 }, 00:35:58.024 "method": "bdev_nvme_attach_controller" 00:35:58.024 } 00:35:58.024 EOF 00:35:58.024 )") 00:35:58.024 09:45:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:58.024 09:45:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:58.024 
09:45:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:58.024 09:45:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:58.024 09:45:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:58.024 09:45:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:58.024 09:45:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:58.024 09:45:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:58.024 09:45:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:58.024 09:45:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:35:58.024 09:45:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:58.024 09:45:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:58.024 09:45:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:58.024 09:45:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:58.024 09:45:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:58.024 09:45:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:58.024 09:45:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:35:58.024 09:45:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:58.024 09:45:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:35:58.024 09:45:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:58.024 09:45:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:58.024 "params": { 00:35:58.024 "name": "Nvme0", 00:35:58.024 "trtype": "tcp", 00:35:58.024 "traddr": "10.0.0.2", 00:35:58.024 "adrfam": "ipv4", 00:35:58.024 "trsvcid": "4420", 00:35:58.024 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:58.024 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:58.024 "hdgst": false, 00:35:58.024 "ddgst": false 00:35:58.024 }, 00:35:58.024 "method": "bdev_nvme_attach_controller" 00:35:58.024 }' 00:35:58.024 09:45:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:58.024 09:45:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:58.025 09:45:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:58.025 09:45:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:58.025 09:45:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:58.025 09:45:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:58.025 09:45:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:58.025 09:45:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:58.025 09:45:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:58.025 09:45:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:58.283 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:58.283 ... 
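The fio invocation logged here is wired up through file descriptors: gen_nvmf_target_json prints one bdev_nvme_attach_controller "params" block per subsystem (the JSON shown just above), and fio_bdev preloads the spdk_bdev engine, reading that bdev config from /dev/fd/62 with the generated job file on /dev/fd/61. A rough stand-alone approximation is sketched below; the outer "subsystems"/"bdev" wrapper is an assumption about what the helper emits around the params block (only the params object appears verbatim in the log), and job.fio is a hypothetical job file standing in for the generated one.

# stand-alone approximation of the harness's fio call (paths shortened; wrapper structure assumed)
cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [ {
    "subsystem": "bdev",
    "config": [ {
      "method": "bdev_nvme_attach_controller",
      "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                  "adrfam": "ipv4", "trsvcid": "4420",
                  "subnqn": "nqn.2016-06.io.spdk:cnode0",
                  "hostnqn": "nqn.2016-06.io.spdk:host0",
                  "hdgst": false, "ddgst": false }
    } ]
  } ]
}
EOF
LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio \
  --ioengine=spdk_bdev --spdk_json_conf /tmp/nvme0.json job.fio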
00:35:58.283 fio-3.35 00:35:58.283 Starting 3 threads 00:35:58.283 EAL: No free 2048 kB hugepages reported on node 1 00:36:04.847 00:36:04.847 filename0: (groupid=0, jobs=1): err= 0: pid=911182: Sun Jul 14 09:45:48 2024 00:36:04.847 read: IOPS=185, BW=23.1MiB/s (24.3MB/s)(117MiB/5048msec) 00:36:04.847 slat (nsec): min=4781, max=41340, avg=13217.65, stdev=4900.62 00:36:04.847 clat (usec): min=5924, max=56609, avg=16146.67, stdev=14571.08 00:36:04.847 lat (usec): min=5936, max=56623, avg=16159.89, stdev=14571.15 00:36:04.847 clat percentiles (usec): 00:36:04.847 | 1.00th=[ 6521], 5.00th=[ 6980], 10.00th=[ 7767], 20.00th=[ 8848], 00:36:04.847 | 30.00th=[ 9241], 40.00th=[ 9634], 50.00th=[10290], 60.00th=[11207], 00:36:04.847 | 70.00th=[12256], 80.00th=[13698], 90.00th=[50070], 95.00th=[52167], 00:36:04.847 | 99.00th=[54264], 99.50th=[55837], 99.90th=[56361], 99.95th=[56361], 00:36:04.847 | 99.99th=[56361] 00:36:04.847 bw ( KiB/s): min=18176, max=30720, per=31.53%, avg=23833.60, stdev=3870.13, samples=10 00:36:04.847 iops : min= 142, max= 240, avg=186.20, stdev=30.24, samples=10 00:36:04.847 lat (msec) : 10=46.25%, 20=39.40%, 50=3.64%, 100=10.71% 00:36:04.847 cpu : usr=92.55%, sys=6.86%, ctx=10, majf=0, minf=119 00:36:04.847 IO depths : 1=4.6%, 2=95.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:04.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.848 issued rwts: total=934,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.848 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:04.848 filename0: (groupid=0, jobs=1): err= 0: pid=911183: Sun Jul 14 09:45:48 2024 00:36:04.848 read: IOPS=209, BW=26.1MiB/s (27.4MB/s)(131MiB/5006msec) 00:36:04.848 slat (nsec): min=5579, max=69144, avg=14338.73, stdev=5841.13 00:36:04.848 clat (usec): min=5816, max=91985, avg=14321.71, stdev=13214.94 00:36:04.848 lat (usec): min=5829, max=92003, avg=14336.05, stdev=13215.34 00:36:04.848 clat percentiles (usec): 00:36:04.848 | 1.00th=[ 6128], 5.00th=[ 6718], 10.00th=[ 7177], 20.00th=[ 8225], 00:36:04.848 | 30.00th=[ 8979], 40.00th=[ 9241], 50.00th=[ 9765], 60.00th=[10552], 00:36:04.848 | 70.00th=[11731], 80.00th=[13042], 90.00th=[49021], 95.00th=[51643], 00:36:04.848 | 99.00th=[54789], 99.50th=[55837], 99.90th=[56361], 99.95th=[91751], 00:36:04.848 | 99.99th=[91751] 00:36:04.848 bw ( KiB/s): min=22528, max=31744, per=35.36%, avg=26731.30, stdev=3032.90, samples=10 00:36:04.848 iops : min= 176, max= 248, avg=208.80, stdev=23.72, samples=10 00:36:04.848 lat (msec) : 10=53.96%, 20=35.53%, 50=2.01%, 100=8.50% 00:36:04.848 cpu : usr=91.73%, sys=7.59%, ctx=11, majf=0, minf=174 00:36:04.848 IO depths : 1=1.3%, 2=98.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:04.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.848 issued rwts: total=1047,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.848 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:04.848 filename0: (groupid=0, jobs=1): err= 0: pid=911184: Sun Jul 14 09:45:48 2024 00:36:04.848 read: IOPS=198, BW=24.8MiB/s (26.0MB/s)(125MiB/5047msec) 00:36:04.848 slat (nsec): min=7239, max=39643, avg=14448.63, stdev=4881.68 00:36:04.848 clat (usec): min=5797, max=93999, avg=15075.66, stdev=14074.87 00:36:04.848 lat (usec): min=5808, max=94013, avg=15090.11, stdev=14074.81 00:36:04.848 clat percentiles (usec): 
00:36:04.848 | 1.00th=[ 6325], 5.00th=[ 6718], 10.00th=[ 7046], 20.00th=[ 8094], 00:36:04.848 | 30.00th=[ 8979], 40.00th=[ 9372], 50.00th=[ 9765], 60.00th=[10683], 00:36:04.848 | 70.00th=[12256], 80.00th=[13435], 90.00th=[50070], 95.00th=[52691], 00:36:04.848 | 99.00th=[54789], 99.50th=[55313], 99.90th=[93848], 99.95th=[93848], 00:36:04.848 | 99.99th=[93848] 00:36:04.848 bw ( KiB/s): min=19200, max=34304, per=33.77%, avg=25527.60, stdev=5215.65, samples=10 00:36:04.848 iops : min= 150, max= 268, avg=199.40, stdev=40.77, samples=10 00:36:04.848 lat (msec) : 10=52.40%, 20=35.50%, 50=1.60%, 100=10.50% 00:36:04.848 cpu : usr=92.31%, sys=7.17%, ctx=14, majf=0, minf=43 00:36:04.848 IO depths : 1=1.3%, 2=98.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:04.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.848 issued rwts: total=1000,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.848 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:04.848 00:36:04.848 Run status group 0 (all jobs): 00:36:04.848 READ: bw=73.8MiB/s (77.4MB/s), 23.1MiB/s-26.1MiB/s (24.3MB/s-27.4MB/s), io=373MiB (391MB), run=5006-5048msec 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
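The create_subsystems loop that starts here repeats the same RPC pattern as the earlier sub-tests, this time creating --dif-type 2 null bdevs for three subsystems. Expressed as direct rpc.py calls against the target's /var/tmp/spdk.sock (a paraphrase of what rpc_cmd does under the hood; the rpc.py path is abbreviated), the sequence for subsystem 0 looks roughly like this:

# what rpc_cmd boils down to for one subsystem (sub 0, DIF type 2)
RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC nvmf_create_transport -t tcp -o --dif-insert-or-strip           # done once, earlier in the log
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2    # 64 MiB null bdev, 512 B blocks + 16 B metadata
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420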
00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:04.848 bdev_null0 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:04.848 [2024-07-14 09:45:48.487918] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:04.848 bdev_null1 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:04.848 bdev_null2 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat 
<<-EOF 00:36:04.848 { 00:36:04.848 "params": { 00:36:04.848 "name": "Nvme$subsystem", 00:36:04.848 "trtype": "$TEST_TRANSPORT", 00:36:04.848 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:04.848 "adrfam": "ipv4", 00:36:04.848 "trsvcid": "$NVMF_PORT", 00:36:04.848 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:04.848 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:04.848 "hdgst": ${hdgst:-false}, 00:36:04.848 "ddgst": ${ddgst:-false} 00:36:04.848 }, 00:36:04.848 "method": "bdev_nvme_attach_controller" 00:36:04.848 } 00:36:04.848 EOF 00:36:04.848 )") 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:04.848 09:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:04.849 09:45:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:04.849 09:45:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:04.849 09:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:04.849 09:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:04.849 09:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:04.849 09:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:04.849 09:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:36:04.849 09:45:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:36:04.849 09:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:04.849 09:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:04.849 09:45:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:04.849 09:45:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:04.849 09:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:04.849 09:45:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:04.849 09:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:36:04.849 09:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:04.849 09:45:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:04.849 09:45:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:04.849 { 00:36:04.849 "params": { 00:36:04.849 "name": "Nvme$subsystem", 00:36:04.849 "trtype": "$TEST_TRANSPORT", 00:36:04.849 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:04.849 "adrfam": "ipv4", 00:36:04.849 "trsvcid": "$NVMF_PORT", 00:36:04.849 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:04.849 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:04.849 "hdgst": ${hdgst:-false}, 00:36:04.849 "ddgst": ${ddgst:-false} 00:36:04.849 }, 00:36:04.849 "method": "bdev_nvme_attach_controller" 00:36:04.849 } 00:36:04.849 EOF 00:36:04.849 )") 00:36:04.849 09:45:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:36:04.849 09:45:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file++ )) 00:36:04.849 09:45:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:04.849 09:45:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:04.849 09:45:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:04.849 09:45:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:04.849 09:45:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:04.849 09:45:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:04.849 { 00:36:04.849 "params": { 00:36:04.849 "name": "Nvme$subsystem", 00:36:04.849 "trtype": "$TEST_TRANSPORT", 00:36:04.849 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:04.849 "adrfam": "ipv4", 00:36:04.849 "trsvcid": "$NVMF_PORT", 00:36:04.849 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:04.849 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:04.849 "hdgst": ${hdgst:-false}, 00:36:04.849 "ddgst": ${ddgst:-false} 00:36:04.849 }, 00:36:04.849 "method": "bdev_nvme_attach_controller" 00:36:04.849 } 00:36:04.849 EOF 00:36:04.849 )") 00:36:04.849 09:45:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:36:04.849 09:45:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:36:04.849 09:45:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:36:04.849 09:45:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:04.849 "params": { 00:36:04.849 "name": "Nvme0", 00:36:04.849 "trtype": "tcp", 00:36:04.849 "traddr": "10.0.0.2", 00:36:04.849 "adrfam": "ipv4", 00:36:04.849 "trsvcid": "4420", 00:36:04.849 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:04.849 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:04.849 "hdgst": false, 00:36:04.849 "ddgst": false 00:36:04.849 }, 00:36:04.849 "method": "bdev_nvme_attach_controller" 00:36:04.849 },{ 00:36:04.849 "params": { 00:36:04.849 "name": "Nvme1", 00:36:04.849 "trtype": "tcp", 00:36:04.849 "traddr": "10.0.0.2", 00:36:04.849 "adrfam": "ipv4", 00:36:04.849 "trsvcid": "4420", 00:36:04.849 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:04.849 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:04.849 "hdgst": false, 00:36:04.849 "ddgst": false 00:36:04.849 }, 00:36:04.849 "method": "bdev_nvme_attach_controller" 00:36:04.849 },{ 00:36:04.849 "params": { 00:36:04.849 "name": "Nvme2", 00:36:04.849 "trtype": "tcp", 00:36:04.849 "traddr": "10.0.0.2", 00:36:04.849 "adrfam": "ipv4", 00:36:04.849 "trsvcid": "4420", 00:36:04.849 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:36:04.849 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:36:04.849 "hdgst": false, 00:36:04.849 "ddgst": false 00:36:04.849 }, 00:36:04.849 "method": "bdev_nvme_attach_controller" 00:36:04.849 }' 00:36:04.849 09:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:04.849 09:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:04.849 09:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:04.849 09:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:04.849 09:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:36:04.849 09:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:04.849 09:45:48 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:36:04.849 09:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:04.849 09:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:04.849 09:45:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:04.849 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:04.849 ... 00:36:04.849 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:04.849 ... 00:36:04.849 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:04.849 ... 00:36:04.849 fio-3.35 00:36:04.849 Starting 24 threads 00:36:04.849 EAL: No free 2048 kB hugepages reported on node 1 00:36:17.053 00:36:17.053 filename0: (groupid=0, jobs=1): err= 0: pid=912042: Sun Jul 14 09:45:59 2024 00:36:17.053 read: IOPS=459, BW=1839KiB/s (1883kB/s)(18.0MiB/10003msec) 00:36:17.053 slat (nsec): min=8069, max=97443, avg=31918.91, stdev=16236.89 00:36:17.053 clat (usec): min=8402, max=94580, avg=34540.94, stdev=4801.43 00:36:17.053 lat (usec): min=8411, max=94609, avg=34572.86, stdev=4800.65 00:36:17.053 clat percentiles (usec): 00:36:17.053 | 1.00th=[26346], 5.00th=[32375], 10.00th=[32900], 20.00th=[33162], 00:36:17.053 | 30.00th=[33424], 40.00th=[33817], 50.00th=[33817], 60.00th=[34341], 00:36:17.053 | 70.00th=[34341], 80.00th=[34866], 90.00th=[35914], 95.00th=[40633], 00:36:17.053 | 99.00th=[51119], 99.50th=[55837], 99.90th=[94897], 99.95th=[94897], 00:36:17.053 | 99.99th=[94897] 00:36:17.053 bw ( KiB/s): min= 1536, max= 1920, per=4.09%, avg=1833.89, stdev=90.96, samples=19 00:36:17.053 iops : min= 384, max= 480, avg=458.47, stdev=22.74, samples=19 00:36:17.053 lat (msec) : 10=0.04%, 20=0.35%, 50=98.30%, 100=1.30% 00:36:17.053 cpu : usr=92.72%, sys=3.38%, ctx=189, majf=0, minf=30 00:36:17.053 IO depths : 1=2.8%, 2=8.5%, 4=23.5%, 8=55.2%, 16=10.0%, 32=0.0%, >=64=0.0% 00:36:17.053 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.053 complete : 0=0.0%, 4=94.0%, 8=0.7%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.053 issued rwts: total=4598,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:17.053 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:17.053 filename0: (groupid=0, jobs=1): err= 0: pid=912043: Sun Jul 14 09:45:59 2024 00:36:17.053 read: IOPS=469, BW=1876KiB/s (1921kB/s)(18.3MiB/10008msec) 00:36:17.053 slat (nsec): min=8023, max=81326, avg=25545.39, stdev=11914.00 00:36:17.053 clat (usec): min=18111, max=55297, avg=33914.90, stdev=2925.85 00:36:17.053 lat (usec): min=18127, max=55313, avg=33940.44, stdev=2926.60 00:36:17.053 clat percentiles (usec): 00:36:17.053 | 1.00th=[23200], 5.00th=[31065], 10.00th=[32375], 20.00th=[33162], 00:36:17.053 | 30.00th=[33424], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:36:17.053 | 70.00th=[34341], 80.00th=[34866], 90.00th=[35390], 95.00th=[36963], 00:36:17.053 | 99.00th=[44303], 99.50th=[45876], 99.90th=[55313], 99.95th=[55313], 00:36:17.053 | 99.99th=[55313] 00:36:17.053 bw ( KiB/s): min= 1664, max= 1968, per=4.18%, avg=1875.37, stdev=76.28, samples=19 00:36:17.053 iops : min= 416, max= 492, avg=468.84, stdev=19.07, samples=19 00:36:17.053 lat (msec) : 20=0.13%, 
50=99.66%, 100=0.21% 00:36:17.053 cpu : usr=92.66%, sys=3.57%, ctx=225, majf=0, minf=36 00:36:17.053 IO depths : 1=2.8%, 2=8.5%, 4=23.2%, 8=55.6%, 16=9.8%, 32=0.0%, >=64=0.0% 00:36:17.053 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.053 complete : 0=0.0%, 4=93.9%, 8=0.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.053 issued rwts: total=4694,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:17.053 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:17.053 filename0: (groupid=0, jobs=1): err= 0: pid=912044: Sun Jul 14 09:45:59 2024 00:36:17.053 read: IOPS=473, BW=1895KiB/s (1941kB/s)(18.5MiB/10013msec) 00:36:17.053 slat (usec): min=8, max=109, avg=31.41, stdev=23.94 00:36:17.054 clat (usec): min=5902, max=55901, avg=33493.11, stdev=4135.36 00:36:17.054 lat (usec): min=5920, max=55915, avg=33524.52, stdev=4137.44 00:36:17.054 clat percentiles (usec): 00:36:17.054 | 1.00th=[14353], 5.00th=[29230], 10.00th=[32375], 20.00th=[33162], 00:36:17.054 | 30.00th=[33424], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:36:17.054 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35914], 00:36:17.054 | 99.00th=[45876], 99.50th=[52691], 99.90th=[55837], 99.95th=[55837], 00:36:17.054 | 99.99th=[55837] 00:36:17.054 bw ( KiB/s): min= 1788, max= 2176, per=4.23%, avg=1895.00, stdev=84.96, samples=20 00:36:17.054 iops : min= 447, max= 544, avg=473.75, stdev=21.24, samples=20 00:36:17.054 lat (msec) : 10=0.67%, 20=1.01%, 50=97.41%, 100=0.91% 00:36:17.054 cpu : usr=98.10%, sys=1.47%, ctx=17, majf=0, minf=26 00:36:17.054 IO depths : 1=3.7%, 2=9.7%, 4=24.1%, 8=53.6%, 16=8.9%, 32=0.0%, >=64=0.0% 00:36:17.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.054 complete : 0=0.0%, 4=94.0%, 8=0.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.054 issued rwts: total=4744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:17.054 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:17.054 filename0: (groupid=0, jobs=1): err= 0: pid=912045: Sun Jul 14 09:45:59 2024 00:36:17.054 read: IOPS=492, BW=1970KiB/s (2017kB/s)(19.2MiB/10008msec) 00:36:17.054 slat (usec): min=7, max=803, avg=24.64, stdev=18.59 00:36:17.054 clat (usec): min=13695, max=64421, avg=32308.51, stdev=5968.80 00:36:17.054 lat (usec): min=13706, max=64430, avg=32333.15, stdev=5973.03 00:36:17.054 clat percentiles (usec): 00:36:17.054 | 1.00th=[19792], 5.00th=[21627], 10.00th=[23462], 20.00th=[26346], 00:36:17.054 | 30.00th=[32637], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:36:17.054 | 70.00th=[33817], 80.00th=[34341], 90.00th=[36439], 95.00th=[42730], 00:36:17.054 | 99.00th=[49021], 99.50th=[55837], 99.90th=[57934], 99.95th=[57934], 00:36:17.054 | 99.99th=[64226] 00:36:17.054 bw ( KiB/s): min= 1664, max= 2384, per=4.40%, avg=1973.89, stdev=219.98, samples=19 00:36:17.054 iops : min= 416, max= 596, avg=493.47, stdev=54.99, samples=19 00:36:17.054 lat (msec) : 20=1.66%, 50=97.65%, 100=0.69% 00:36:17.054 cpu : usr=87.88%, sys=5.29%, ctx=184, majf=0, minf=61 00:36:17.054 IO depths : 1=2.9%, 2=5.9%, 4=14.8%, 8=66.3%, 16=10.0%, 32=0.0%, >=64=0.0% 00:36:17.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.054 complete : 0=0.0%, 4=91.3%, 8=3.5%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.054 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:17.054 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:17.054 filename0: (groupid=0, jobs=1): err= 0: pid=912046: Sun Jul 14 09:45:59 2024 00:36:17.054 
read: IOPS=474, BW=1897KiB/s (1942kB/s)(18.5MiB/10008msec) 00:36:17.054 slat (usec): min=8, max=480, avg=35.54, stdev=23.58 00:36:17.054 clat (usec): min=15075, max=60668, avg=33470.28, stdev=5629.65 00:36:17.054 lat (usec): min=15144, max=60745, avg=33505.82, stdev=5630.75 00:36:17.054 clat percentiles (usec): 00:36:17.054 | 1.00th=[19530], 5.00th=[22152], 10.00th=[25560], 20.00th=[32637], 00:36:17.054 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:36:17.054 | 70.00th=[34341], 80.00th=[34866], 90.00th=[36963], 95.00th=[43779], 00:36:17.054 | 99.00th=[54789], 99.50th=[57410], 99.90th=[60556], 99.95th=[60556], 00:36:17.054 | 99.99th=[60556] 00:36:17.054 bw ( KiB/s): min= 1664, max= 2112, per=4.23%, avg=1897.26, stdev=111.02, samples=19 00:36:17.054 iops : min= 416, max= 528, avg=474.32, stdev=27.75, samples=19 00:36:17.054 lat (msec) : 20=1.75%, 50=96.90%, 100=1.35% 00:36:17.054 cpu : usr=92.29%, sys=3.99%, ctx=192, majf=0, minf=33 00:36:17.054 IO depths : 1=4.4%, 2=8.8%, 4=19.6%, 8=58.7%, 16=8.6%, 32=0.0%, >=64=0.0% 00:36:17.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.054 complete : 0=0.0%, 4=92.8%, 8=1.8%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.054 issued rwts: total=4746,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:17.054 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:17.054 filename0: (groupid=0, jobs=1): err= 0: pid=912047: Sun Jul 14 09:45:59 2024 00:36:17.054 read: IOPS=468, BW=1873KiB/s (1918kB/s)(18.3MiB/10005msec) 00:36:17.054 slat (usec): min=7, max=111, avg=35.25, stdev=20.68 00:36:17.054 clat (usec): min=14581, max=65667, avg=33894.56, stdev=3042.75 00:36:17.054 lat (usec): min=14621, max=65688, avg=33929.82, stdev=3040.93 00:36:17.054 clat percentiles (usec): 00:36:17.054 | 1.00th=[24511], 5.00th=[32113], 10.00th=[32637], 20.00th=[33162], 00:36:17.054 | 30.00th=[33424], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:36:17.054 | 70.00th=[34341], 80.00th=[34866], 90.00th=[35390], 95.00th=[36439], 00:36:17.054 | 99.00th=[41681], 99.50th=[50594], 99.90th=[65799], 99.95th=[65799], 00:36:17.054 | 99.99th=[65799] 00:36:17.054 bw ( KiB/s): min= 1664, max= 1936, per=4.16%, avg=1864.21, stdev=69.65, samples=19 00:36:17.054 iops : min= 416, max= 484, avg=466.05, stdev=17.41, samples=19 00:36:17.054 lat (msec) : 20=0.56%, 50=98.85%, 100=0.60% 00:36:17.054 cpu : usr=97.29%, sys=1.71%, ctx=47, majf=0, minf=26 00:36:17.054 IO depths : 1=1.7%, 2=7.5%, 4=23.6%, 8=56.1%, 16=11.1%, 32=0.0%, >=64=0.0% 00:36:17.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.054 complete : 0=0.0%, 4=94.1%, 8=0.5%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.054 issued rwts: total=4684,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:17.054 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:17.054 filename0: (groupid=0, jobs=1): err= 0: pid=912048: Sun Jul 14 09:45:59 2024 00:36:17.054 read: IOPS=464, BW=1860KiB/s (1904kB/s)(18.2MiB/10003msec) 00:36:17.054 slat (usec): min=8, max=127, avg=53.24, stdev=28.22 00:36:17.054 clat (usec): min=4680, max=98667, avg=34025.02, stdev=4066.14 00:36:17.054 lat (usec): min=4689, max=98702, avg=34078.26, stdev=4065.17 00:36:17.054 clat percentiles (usec): 00:36:17.054 | 1.00th=[24511], 5.00th=[32113], 10.00th=[32637], 20.00th=[32900], 00:36:17.054 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:36:17.054 | 70.00th=[34341], 80.00th=[34341], 90.00th=[35390], 95.00th=[35914], 00:36:17.054 | 99.00th=[50070], 
99.50th=[53216], 99.90th=[78119], 99.95th=[78119], 00:36:17.054 | 99.99th=[99091] 00:36:17.054 bw ( KiB/s): min= 1536, max= 1936, per=4.14%, avg=1858.32, stdev=94.52, samples=19 00:36:17.054 iops : min= 384, max= 484, avg=464.58, stdev=23.63, samples=19 00:36:17.054 lat (msec) : 10=0.15%, 20=0.34%, 50=98.49%, 100=1.01% 00:36:17.054 cpu : usr=98.23%, sys=1.23%, ctx=48, majf=0, minf=33 00:36:17.054 IO depths : 1=1.3%, 2=6.7%, 4=22.7%, 8=57.7%, 16=11.6%, 32=0.0%, >=64=0.0% 00:36:17.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.054 complete : 0=0.0%, 4=93.9%, 8=0.8%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.054 issued rwts: total=4651,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:17.054 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:17.054 filename0: (groupid=0, jobs=1): err= 0: pid=912049: Sun Jul 14 09:45:59 2024 00:36:17.054 read: IOPS=464, BW=1858KiB/s (1902kB/s)(18.2MiB/10012msec) 00:36:17.054 slat (usec): min=8, max=111, avg=37.39, stdev=16.41 00:36:17.054 clat (usec): min=16103, max=66622, avg=34130.27, stdev=3142.64 00:36:17.054 lat (usec): min=16135, max=66653, avg=34167.66, stdev=3144.31 00:36:17.054 clat percentiles (usec): 00:36:17.054 | 1.00th=[25035], 5.00th=[32375], 10.00th=[32900], 20.00th=[33162], 00:36:17.054 | 30.00th=[33424], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:36:17.054 | 70.00th=[34341], 80.00th=[34866], 90.00th=[35390], 95.00th=[36963], 00:36:17.054 | 99.00th=[44827], 99.50th=[54789], 99.90th=[66323], 99.95th=[66323], 00:36:17.054 | 99.99th=[66847] 00:36:17.054 bw ( KiB/s): min= 1664, max= 1920, per=4.13%, avg=1850.11, stdev=80.04, samples=19 00:36:17.054 iops : min= 416, max= 480, avg=462.53, stdev=20.01, samples=19 00:36:17.054 lat (msec) : 20=0.43%, 50=99.01%, 100=0.56% 00:36:17.054 cpu : usr=98.28%, sys=1.32%, ctx=16, majf=0, minf=26 00:36:17.054 IO depths : 1=4.6%, 2=10.7%, 4=24.5%, 8=52.3%, 16=7.9%, 32=0.0%, >=64=0.0% 00:36:17.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.054 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.054 issued rwts: total=4650,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:17.054 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:17.054 filename1: (groupid=0, jobs=1): err= 0: pid=912050: Sun Jul 14 09:45:59 2024 00:36:17.054 read: IOPS=461, BW=1845KiB/s (1889kB/s)(18.0MiB/10012msec) 00:36:17.054 slat (usec): min=8, max=122, avg=56.68, stdev=25.54 00:36:17.054 clat (usec): min=12447, max=62172, avg=34275.30, stdev=5906.38 00:36:17.054 lat (usec): min=12531, max=62269, avg=34331.98, stdev=5906.00 00:36:17.054 clat percentiles (usec): 00:36:17.054 | 1.00th=[17433], 5.00th=[23200], 10.00th=[31065], 20.00th=[32637], 00:36:17.054 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:36:17.054 | 70.00th=[34341], 80.00th=[34866], 90.00th=[41681], 95.00th=[46400], 00:36:17.054 | 99.00th=[54264], 99.50th=[55837], 99.90th=[62129], 99.95th=[62129], 00:36:17.054 | 99.99th=[62129] 00:36:17.054 bw ( KiB/s): min= 1664, max= 2016, per=4.11%, avg=1844.80, stdev=93.17, samples=20 00:36:17.054 iops : min= 416, max= 504, avg=461.20, stdev=23.29, samples=20 00:36:17.054 lat (msec) : 20=3.12%, 50=94.56%, 100=2.32% 00:36:17.054 cpu : usr=97.29%, sys=1.78%, ctx=44, majf=0, minf=38 00:36:17.054 IO depths : 1=3.1%, 2=7.7%, 4=19.2%, 8=59.9%, 16=10.0%, 32=0.0%, >=64=0.0% 00:36:17.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.054 complete : 0=0.0%, 4=93.0%, 
8=1.9%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.054 issued rwts: total=4618,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:17.054 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:17.054 filename1: (groupid=0, jobs=1): err= 0: pid=912051: Sun Jul 14 09:45:59 2024 00:36:17.054 read: IOPS=473, BW=1892KiB/s (1938kB/s)(18.5MiB/10011msec) 00:36:17.054 slat (nsec): min=5365, max=97572, avg=29423.03, stdev=14150.50 00:36:17.054 clat (usec): min=6199, max=56462, avg=33571.90, stdev=3080.32 00:36:17.054 lat (usec): min=6217, max=56475, avg=33601.32, stdev=3081.86 00:36:17.054 clat percentiles (usec): 00:36:17.054 | 1.00th=[11338], 5.00th=[32375], 10.00th=[32900], 20.00th=[33424], 00:36:17.054 | 30.00th=[33424], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:36:17.054 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:36:17.054 | 99.00th=[36963], 99.50th=[38536], 99.90th=[48497], 99.95th=[48497], 00:36:17.054 | 99.99th=[56361] 00:36:17.054 bw ( KiB/s): min= 1792, max= 2048, per=4.21%, avg=1887.80, stdev=70.33, samples=20 00:36:17.054 iops : min= 448, max= 512, avg=471.95, stdev=17.58, samples=20 00:36:17.054 lat (msec) : 10=0.68%, 20=0.38%, 50=98.90%, 100=0.04% 00:36:17.054 cpu : usr=93.58%, sys=3.15%, ctx=273, majf=0, minf=27 00:36:17.054 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:36:17.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.054 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.054 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:17.054 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:17.054 filename1: (groupid=0, jobs=1): err= 0: pid=912052: Sun Jul 14 09:45:59 2024 00:36:17.055 read: IOPS=466, BW=1868KiB/s (1913kB/s)(18.2MiB/10005msec) 00:36:17.055 slat (usec): min=7, max=123, avg=36.73, stdev=17.78 00:36:17.055 clat (usec): min=13548, max=80203, avg=33935.16, stdev=2693.09 00:36:17.055 lat (usec): min=13557, max=80222, avg=33971.89, stdev=2691.71 00:36:17.055 clat percentiles (usec): 00:36:17.055 | 1.00th=[31065], 5.00th=[32375], 10.00th=[32900], 20.00th=[33162], 00:36:17.055 | 30.00th=[33424], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:36:17.055 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:36:17.055 | 99.00th=[39584], 99.50th=[42730], 99.90th=[65799], 99.95th=[80217], 00:36:17.055 | 99.99th=[80217] 00:36:17.055 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1865.05, stdev=76.08, samples=19 00:36:17.055 iops : min= 416, max= 480, avg=466.26, stdev=19.02, samples=19 00:36:17.055 lat (msec) : 20=0.13%, 50=99.49%, 100=0.39% 00:36:17.055 cpu : usr=98.14%, sys=1.27%, ctx=53, majf=0, minf=38 00:36:17.055 IO depths : 1=4.9%, 2=11.2%, 4=25.0%, 8=51.4%, 16=7.6%, 32=0.0%, >=64=0.0% 00:36:17.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.055 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.055 issued rwts: total=4672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:17.055 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:17.055 filename1: (groupid=0, jobs=1): err= 0: pid=912053: Sun Jul 14 09:45:59 2024 00:36:17.055 read: IOPS=468, BW=1872KiB/s (1917kB/s)(18.3MiB/10008msec) 00:36:17.055 slat (usec): min=8, max=137, avg=27.44, stdev=19.29 00:36:17.055 clat (usec): min=15537, max=78641, avg=33961.04, stdev=3415.72 00:36:17.055 lat (usec): min=15546, max=78672, avg=33988.48, stdev=3415.78 00:36:17.055 
clat percentiles (usec): 00:36:17.055 | 1.00th=[24249], 5.00th=[31851], 10.00th=[32637], 20.00th=[33162], 00:36:17.055 | 30.00th=[33424], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:36:17.055 | 70.00th=[34341], 80.00th=[34866], 90.00th=[34866], 95.00th=[35914], 00:36:17.055 | 99.00th=[43254], 99.50th=[58459], 99.90th=[78119], 99.95th=[78119], 00:36:17.055 | 99.99th=[78119] 00:36:17.055 bw ( KiB/s): min= 1664, max= 2048, per=4.17%, avg=1868.63, stdev=86.42, samples=19 00:36:17.055 iops : min= 416, max= 512, avg=467.16, stdev=21.61, samples=19 00:36:17.055 lat (msec) : 20=0.38%, 50=99.02%, 100=0.60% 00:36:17.055 cpu : usr=97.73%, sys=1.61%, ctx=104, majf=0, minf=23 00:36:17.055 IO depths : 1=5.0%, 2=10.9%, 4=23.8%, 8=52.7%, 16=7.5%, 32=0.0%, >=64=0.0% 00:36:17.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.055 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.055 issued rwts: total=4684,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:17.055 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:17.055 filename1: (groupid=0, jobs=1): err= 0: pid=912054: Sun Jul 14 09:45:59 2024 00:36:17.055 read: IOPS=475, BW=1903KiB/s (1949kB/s)(18.6MiB/10013msec) 00:36:17.055 slat (nsec): min=8132, max=75367, avg=19849.95, stdev=11771.35 00:36:17.055 clat (usec): min=5227, max=48914, avg=33460.17, stdev=3283.65 00:36:17.055 lat (usec): min=5245, max=48925, avg=33480.02, stdev=3283.45 00:36:17.055 clat percentiles (usec): 00:36:17.055 | 1.00th=[14222], 5.00th=[31851], 10.00th=[32637], 20.00th=[33162], 00:36:17.055 | 30.00th=[33424], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:36:17.055 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:36:17.055 | 99.00th=[36963], 99.50th=[42206], 99.90th=[45876], 99.95th=[46924], 00:36:17.055 | 99.99th=[49021] 00:36:17.055 bw ( KiB/s): min= 1788, max= 2272, per=4.24%, avg=1899.00, stdev=114.06, samples=20 00:36:17.055 iops : min= 447, max= 568, avg=474.75, stdev=28.52, samples=20 00:36:17.055 lat (msec) : 10=0.80%, 20=0.76%, 50=98.45% 00:36:17.055 cpu : usr=98.26%, sys=1.34%, ctx=16, majf=0, minf=49 00:36:17.055 IO depths : 1=5.8%, 2=11.9%, 4=24.4%, 8=51.2%, 16=6.8%, 32=0.0%, >=64=0.0% 00:36:17.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.055 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.055 issued rwts: total=4764,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:17.055 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:17.055 filename1: (groupid=0, jobs=1): err= 0: pid=912055: Sun Jul 14 09:45:59 2024 00:36:17.055 read: IOPS=458, BW=1835KiB/s (1879kB/s)(18.0MiB/10043msec) 00:36:17.055 slat (usec): min=8, max=109, avg=35.65, stdev=20.06 00:36:17.055 clat (usec): min=16123, max=88214, avg=34492.11, stdev=4527.68 00:36:17.055 lat (usec): min=16141, max=88234, avg=34527.76, stdev=4526.80 00:36:17.055 clat percentiles (usec): 00:36:17.055 | 1.00th=[19006], 5.00th=[32113], 10.00th=[32637], 20.00th=[33162], 00:36:17.055 | 30.00th=[33424], 40.00th=[33817], 50.00th=[33817], 60.00th=[34341], 00:36:17.055 | 70.00th=[34341], 80.00th=[34866], 90.00th=[36439], 95.00th=[43254], 00:36:17.055 | 99.00th=[51119], 99.50th=[53740], 99.90th=[63701], 99.95th=[87557], 00:36:17.055 | 99.99th=[88605] 00:36:17.055 bw ( KiB/s): min= 1536, max= 1936, per=4.10%, avg=1838.11, stdev=92.63, samples=19 00:36:17.055 iops : min= 384, max= 484, avg=459.53, stdev=23.16, samples=19 00:36:17.055 lat (msec) : 
20=1.39%, 50=97.27%, 100=1.35% 00:36:17.055 cpu : usr=96.21%, sys=2.47%, ctx=159, majf=0, minf=40 00:36:17.055 IO depths : 1=0.6%, 2=5.4%, 4=20.6%, 8=60.5%, 16=12.9%, 32=0.0%, >=64=0.0% 00:36:17.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.055 complete : 0=0.0%, 4=93.5%, 8=1.7%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.055 issued rwts: total=4608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:17.055 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:17.055 filename1: (groupid=0, jobs=1): err= 0: pid=912056: Sun Jul 14 09:45:59 2024 00:36:17.055 read: IOPS=468, BW=1874KiB/s (1919kB/s)(18.3MiB/10004msec) 00:36:17.055 slat (nsec): min=8086, max=80709, avg=25444.90, stdev=12348.63 00:36:17.055 clat (usec): min=13550, max=51735, avg=33939.36, stdev=2465.50 00:36:17.055 lat (usec): min=13572, max=51757, avg=33964.81, stdev=2465.11 00:36:17.055 clat percentiles (usec): 00:36:17.055 | 1.00th=[23462], 5.00th=[32113], 10.00th=[32637], 20.00th=[33162], 00:36:17.055 | 30.00th=[33424], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:36:17.055 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35914], 00:36:17.055 | 99.00th=[44827], 99.50th=[45876], 99.90th=[49546], 99.95th=[49546], 00:36:17.055 | 99.99th=[51643] 00:36:17.055 bw ( KiB/s): min= 1664, max= 1976, per=4.18%, avg=1872.00, stdev=76.32, samples=19 00:36:17.055 iops : min= 416, max= 494, avg=468.00, stdev=19.08, samples=19 00:36:17.055 lat (msec) : 20=0.34%, 50=99.62%, 100=0.04% 00:36:17.055 cpu : usr=98.09%, sys=1.50%, ctx=17, majf=0, minf=27 00:36:17.055 IO depths : 1=4.6%, 2=10.7%, 4=24.5%, 8=52.3%, 16=8.0%, 32=0.0%, >=64=0.0% 00:36:17.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.055 complete : 0=0.0%, 4=94.1%, 8=0.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.055 issued rwts: total=4686,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:17.055 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:17.055 filename1: (groupid=0, jobs=1): err= 0: pid=912057: Sun Jul 14 09:45:59 2024 00:36:17.055 read: IOPS=467, BW=1869KiB/s (1914kB/s)(18.3MiB/10011msec) 00:36:17.055 slat (nsec): min=7950, max=93038, avg=29893.83, stdev=12826.20 00:36:17.055 clat (usec): min=16444, max=66269, avg=33996.74, stdev=3761.84 00:36:17.055 lat (usec): min=16458, max=66305, avg=34026.63, stdev=3761.95 00:36:17.055 clat percentiles (usec): 00:36:17.055 | 1.00th=[22152], 5.00th=[31327], 10.00th=[32637], 20.00th=[33162], 00:36:17.055 | 30.00th=[33424], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:36:17.055 | 70.00th=[34341], 80.00th=[34866], 90.00th=[35390], 95.00th=[38536], 00:36:17.055 | 99.00th=[45351], 99.50th=[55837], 99.90th=[66323], 99.95th=[66323], 00:36:17.055 | 99.99th=[66323] 00:36:17.055 bw ( KiB/s): min= 1664, max= 1968, per=4.15%, avg=1861.89, stdev=77.69, samples=19 00:36:17.055 iops : min= 416, max= 492, avg=465.47, stdev=19.42, samples=19 00:36:17.055 lat (msec) : 20=0.41%, 50=98.72%, 100=0.88% 00:36:17.055 cpu : usr=97.26%, sys=2.07%, ctx=96, majf=0, minf=31 00:36:17.055 IO depths : 1=2.2%, 2=7.4%, 4=22.1%, 8=57.9%, 16=10.4%, 32=0.0%, >=64=0.0% 00:36:17.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.055 complete : 0=0.0%, 4=93.5%, 8=0.9%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.055 issued rwts: total=4678,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:17.055 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:17.055 filename2: (groupid=0, jobs=1): err= 0: pid=912058: Sun Jul 14 
09:45:59 2024 00:36:17.055 read: IOPS=465, BW=1864KiB/s (1909kB/s)(18.2MiB/10005msec) 00:36:17.055 slat (usec): min=8, max=106, avg=32.54, stdev=17.21 00:36:17.055 clat (usec): min=4947, max=65486, avg=34049.11, stdev=3475.96 00:36:17.055 lat (usec): min=4956, max=65503, avg=34081.65, stdev=3477.01 00:36:17.055 clat percentiles (usec): 00:36:17.055 | 1.00th=[23987], 5.00th=[32375], 10.00th=[32900], 20.00th=[33162], 00:36:17.055 | 30.00th=[33424], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:36:17.055 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[36439], 00:36:17.055 | 99.00th=[46924], 99.50th=[55313], 99.90th=[65274], 99.95th=[65274], 00:36:17.055 | 99.99th=[65274] 00:36:17.055 bw ( KiB/s): min= 1664, max= 1920, per=4.14%, avg=1858.32, stdev=74.87, samples=19 00:36:17.055 iops : min= 416, max= 480, avg=464.58, stdev=18.72, samples=19 00:36:17.055 lat (msec) : 10=0.13%, 20=0.71%, 50=98.35%, 100=0.82% 00:36:17.055 cpu : usr=93.30%, sys=3.33%, ctx=164, majf=0, minf=34 00:36:17.055 IO depths : 1=4.9%, 2=10.7%, 4=23.8%, 8=52.9%, 16=7.7%, 32=0.0%, >=64=0.0% 00:36:17.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.055 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.055 issued rwts: total=4662,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:17.055 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:17.055 filename2: (groupid=0, jobs=1): err= 0: pid=912059: Sun Jul 14 09:45:59 2024 00:36:17.055 read: IOPS=467, BW=1868KiB/s (1913kB/s)(18.3MiB/10007msec) 00:36:17.055 slat (usec): min=7, max=118, avg=33.44, stdev=17.75 00:36:17.055 clat (usec): min=14199, max=67855, avg=34001.67, stdev=3784.09 00:36:17.055 lat (usec): min=14229, max=67882, avg=34035.11, stdev=3784.52 00:36:17.055 clat percentiles (usec): 00:36:17.055 | 1.00th=[23987], 5.00th=[30016], 10.00th=[32637], 20.00th=[33162], 00:36:17.055 | 30.00th=[33424], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:36:17.055 | 70.00th=[34341], 80.00th=[34866], 90.00th=[35390], 95.00th=[36963], 00:36:17.055 | 99.00th=[50594], 99.50th=[52691], 99.90th=[67634], 99.95th=[67634], 00:36:17.055 | 99.99th=[67634] 00:36:17.055 bw ( KiB/s): min= 1664, max= 1936, per=4.15%, avg=1860.21, stdev=71.52, samples=19 00:36:17.055 iops : min= 416, max= 484, avg=465.05, stdev=17.88, samples=19 00:36:17.055 lat (msec) : 20=0.73%, 50=98.25%, 100=1.03% 00:36:17.055 cpu : usr=98.07%, sys=1.51%, ctx=14, majf=0, minf=24 00:36:17.055 IO depths : 1=1.9%, 2=6.9%, 4=21.5%, 8=58.5%, 16=11.3%, 32=0.0%, >=64=0.0% 00:36:17.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.055 complete : 0=0.0%, 4=93.8%, 8=1.0%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.055 issued rwts: total=4674,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:17.055 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:17.055 filename2: (groupid=0, jobs=1): err= 0: pid=912060: Sun Jul 14 09:45:59 2024 00:36:17.055 read: IOPS=470, BW=1883KiB/s (1929kB/s)(18.5MiB/10050msec) 00:36:17.055 slat (usec): min=8, max=108, avg=29.45, stdev=18.01 00:36:17.056 clat (usec): min=14674, max=53029, avg=33656.98, stdev=2545.35 00:36:17.056 lat (usec): min=14686, max=53051, avg=33686.43, stdev=2543.48 00:36:17.056 clat percentiles (usec): 00:36:17.056 | 1.00th=[22152], 5.00th=[31327], 10.00th=[32375], 20.00th=[33162], 00:36:17.056 | 30.00th=[33424], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:36:17.056 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 
00:36:17.056 | 99.00th=[42206], 99.50th=[43779], 99.90th=[47973], 99.95th=[50594], 00:36:17.056 | 99.99th=[53216] 00:36:17.056 bw ( KiB/s): min= 1792, max= 2048, per=4.22%, avg=1891.95, stdev=70.04, samples=20 00:36:17.056 iops : min= 448, max= 512, avg=472.95, stdev=17.49, samples=20 00:36:17.056 lat (msec) : 20=0.21%, 50=99.70%, 100=0.08% 00:36:17.056 cpu : usr=94.18%, sys=2.90%, ctx=92, majf=0, minf=31 00:36:17.056 IO depths : 1=1.6%, 2=6.6%, 4=20.3%, 8=59.6%, 16=11.9%, 32=0.0%, >=64=0.0% 00:36:17.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.056 complete : 0=0.0%, 4=93.3%, 8=2.0%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.056 issued rwts: total=4732,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:17.056 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:17.056 filename2: (groupid=0, jobs=1): err= 0: pid=912061: Sun Jul 14 09:45:59 2024 00:36:17.056 read: IOPS=467, BW=1870KiB/s (1915kB/s)(18.3MiB/10007msec) 00:36:17.056 slat (usec): min=8, max=110, avg=36.51, stdev=19.43 00:36:17.056 clat (usec): min=16715, max=71107, avg=33946.34, stdev=3334.43 00:36:17.056 lat (usec): min=16768, max=71134, avg=33982.84, stdev=3333.76 00:36:17.056 clat percentiles (usec): 00:36:17.056 | 1.00th=[22414], 5.00th=[30016], 10.00th=[32637], 20.00th=[33162], 00:36:17.056 | 30.00th=[33424], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:36:17.056 | 70.00th=[34341], 80.00th=[34866], 90.00th=[35390], 95.00th=[36963], 00:36:17.056 | 99.00th=[48497], 99.50th=[51119], 99.90th=[55837], 99.95th=[70779], 00:36:17.056 | 99.99th=[70779] 00:36:17.056 bw ( KiB/s): min= 1640, max= 1968, per=4.17%, avg=1868.63, stdev=82.59, samples=19 00:36:17.056 iops : min= 410, max= 492, avg=467.16, stdev=20.65, samples=19 00:36:17.056 lat (msec) : 20=0.38%, 50=99.04%, 100=0.58% 00:36:17.056 cpu : usr=97.96%, sys=1.64%, ctx=18, majf=0, minf=39 00:36:17.056 IO depths : 1=3.8%, 2=8.8%, 4=21.7%, 8=56.5%, 16=9.3%, 32=0.0%, >=64=0.0% 00:36:17.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.056 complete : 0=0.0%, 4=93.4%, 8=1.3%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.056 issued rwts: total=4678,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:17.056 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:17.056 filename2: (groupid=0, jobs=1): err= 0: pid=912062: Sun Jul 14 09:45:59 2024 00:36:17.056 read: IOPS=464, BW=1858KiB/s (1903kB/s)(18.1MiB/10002msec) 00:36:17.056 slat (nsec): min=8042, max=78324, avg=27903.54, stdev=12475.55 00:36:17.056 clat (usec): min=8971, max=86647, avg=34235.32, stdev=3678.45 00:36:17.056 lat (usec): min=8980, max=86682, avg=34263.22, stdev=3678.53 00:36:17.056 clat percentiles (usec): 00:36:17.056 | 1.00th=[25297], 5.00th=[32375], 10.00th=[32900], 20.00th=[33162], 00:36:17.056 | 30.00th=[33424], 40.00th=[33817], 50.00th=[33817], 60.00th=[34341], 00:36:17.056 | 70.00th=[34341], 80.00th=[34866], 90.00th=[35390], 95.00th=[36439], 00:36:17.056 | 99.00th=[52691], 99.50th=[54789], 99.90th=[63177], 99.95th=[86508], 00:36:17.056 | 99.99th=[86508] 00:36:17.056 bw ( KiB/s): min= 1568, max= 1920, per=4.12%, avg=1848.21, stdev=88.45, samples=19 00:36:17.056 iops : min= 392, max= 480, avg=462.05, stdev=22.11, samples=19 00:36:17.056 lat (msec) : 10=0.04%, 20=0.39%, 50=98.36%, 100=1.21% 00:36:17.056 cpu : usr=98.05%, sys=1.57%, ctx=15, majf=0, minf=35 00:36:17.056 IO depths : 1=0.4%, 2=6.0%, 4=23.2%, 8=58.0%, 16=12.5%, 32=0.0%, >=64=0.0% 00:36:17.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:36:17.056 complete : 0=0.0%, 4=94.1%, 8=0.4%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.056 issued rwts: total=4646,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:17.056 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:17.056 filename2: (groupid=0, jobs=1): err= 0: pid=912063: Sun Jul 14 09:45:59 2024 00:36:17.056 read: IOPS=476, BW=1907KiB/s (1953kB/s)(18.6MiB/10013msec) 00:36:17.056 slat (nsec): min=8063, max=80761, avg=16481.71, stdev=9842.96 00:36:17.056 clat (usec): min=5326, max=58542, avg=33421.59, stdev=4701.99 00:36:17.056 lat (usec): min=5337, max=58555, avg=33438.07, stdev=4702.10 00:36:17.056 clat percentiles (usec): 00:36:17.056 | 1.00th=[14091], 5.00th=[25035], 10.00th=[32113], 20.00th=[33162], 00:36:17.056 | 30.00th=[33424], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:36:17.056 | 70.00th=[34341], 80.00th=[34866], 90.00th=[35390], 95.00th=[36439], 00:36:17.056 | 99.00th=[48497], 99.50th=[52167], 99.90th=[58459], 99.95th=[58459], 00:36:17.056 | 99.99th=[58459] 00:36:17.056 bw ( KiB/s): min= 1792, max= 2048, per=4.24%, avg=1903.00, stdev=83.42, samples=20 00:36:17.056 iops : min= 448, max= 512, avg=475.75, stdev=20.86, samples=20 00:36:17.056 lat (msec) : 10=0.86%, 20=1.72%, 50=96.75%, 100=0.67% 00:36:17.056 cpu : usr=98.18%, sys=1.43%, ctx=18, majf=0, minf=46 00:36:17.056 IO depths : 1=4.3%, 2=10.0%, 4=23.2%, 8=54.1%, 16=8.5%, 32=0.0%, >=64=0.0% 00:36:17.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.056 complete : 0=0.0%, 4=93.9%, 8=0.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.056 issued rwts: total=4774,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:17.056 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:17.056 filename2: (groupid=0, jobs=1): err= 0: pid=912064: Sun Jul 14 09:45:59 2024 00:36:17.056 read: IOPS=467, BW=1871KiB/s (1915kB/s)(18.3MiB/10008msec) 00:36:17.056 slat (usec): min=7, max=120, avg=35.05, stdev=20.08 00:36:17.056 clat (usec): min=11461, max=62986, avg=33976.59, stdev=3876.31 00:36:17.056 lat (usec): min=11524, max=63000, avg=34011.64, stdev=3873.98 00:36:17.056 clat percentiles (usec): 00:36:17.056 | 1.00th=[22152], 5.00th=[27919], 10.00th=[32375], 20.00th=[33162], 00:36:17.056 | 30.00th=[33424], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:36:17.056 | 70.00th=[34341], 80.00th=[34866], 90.00th=[35390], 95.00th=[39584], 00:36:17.056 | 99.00th=[48497], 99.50th=[53216], 99.90th=[63177], 99.95th=[63177], 00:36:17.056 | 99.99th=[63177] 00:36:17.056 bw ( KiB/s): min= 1696, max= 1968, per=4.17%, avg=1869.47, stdev=67.09, samples=19 00:36:17.056 iops : min= 424, max= 492, avg=467.37, stdev=16.77, samples=19 00:36:17.056 lat (msec) : 20=0.56%, 50=98.72%, 100=0.73% 00:36:17.056 cpu : usr=98.40%, sys=1.17%, ctx=25, majf=0, minf=27 00:36:17.056 IO depths : 1=3.0%, 2=6.1%, 4=15.5%, 8=64.9%, 16=10.5%, 32=0.0%, >=64=0.0% 00:36:17.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.056 complete : 0=0.0%, 4=91.9%, 8=3.3%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.056 issued rwts: total=4680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:17.056 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:17.056 filename2: (groupid=0, jobs=1): err= 0: pid=912065: Sun Jul 14 09:45:59 2024 00:36:17.056 read: IOPS=466, BW=1866KiB/s (1911kB/s)(18.2MiB/10008msec) 00:36:17.056 slat (usec): min=8, max=119, avg=30.88, stdev=18.82 00:36:17.056 clat (usec): min=19471, max=62226, avg=34060.41, stdev=2909.41 00:36:17.056 lat (usec): min=19551, max=62321, 
avg=34091.29, stdev=2911.20 00:36:17.056 clat percentiles (usec): 00:36:17.056 | 1.00th=[24773], 5.00th=[32113], 10.00th=[32900], 20.00th=[33162], 00:36:17.056 | 30.00th=[33424], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:36:17.056 | 70.00th=[34341], 80.00th=[34866], 90.00th=[35390], 95.00th=[36963], 00:36:17.056 | 99.00th=[46400], 99.50th=[47973], 99.90th=[62129], 99.95th=[62129], 00:36:17.056 | 99.99th=[62129] 00:36:17.056 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1864.84, stdev=71.50, samples=19 00:36:17.056 iops : min= 416, max= 480, avg=466.21, stdev=17.87, samples=19 00:36:17.056 lat (msec) : 20=0.13%, 50=99.49%, 100=0.39% 00:36:17.056 cpu : usr=97.74%, sys=1.59%, ctx=58, majf=0, minf=30 00:36:17.056 IO depths : 1=3.3%, 2=8.9%, 4=23.4%, 8=55.1%, 16=9.3%, 32=0.0%, >=64=0.0% 00:36:17.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.056 complete : 0=0.0%, 4=93.9%, 8=0.5%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.056 issued rwts: total=4669,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:17.056 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:17.056 00:36:17.056 Run status group 0 (all jobs): 00:36:17.056 READ: bw=43.8MiB/s (45.9MB/s), 1835KiB/s-1970KiB/s (1879kB/s-2017kB/s), io=440MiB (461MB), run=10002-10050msec 00:36:17.056 09:46:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:36:17.056 09:46:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:17.056 09:46:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:17.056 09:46:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:17.056 09:46:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:17.056 09:46:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:17.056 09:46:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.056 09:46:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:17.056 09:46:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.056 09:46:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:17.056 09:46:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.056 09:46:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:17.056 09:46:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.056 09:46:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:17.056 09:46:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:17.056 09:46:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:17.056 09:46:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:17.056 09:46:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.056 09:46:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:17.056 09:46:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.056 09:46:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:17.056 09:46:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 
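The randread run summarized above ("Run status group 0") was driven by fio's external SPDK bdev engine: the trace assembles one bdev_nvme_attach_controller "params"/"method" object per subsystem, feeds the resulting JSON to the plugin over /dev/fd/62, and preloads build/fio/spdk_bdev. A minimal standalone sketch of an equivalent invocation follows; the "subsystems"/"bdev"/"config" wrapper around the method objects and the job-file contents (bdev name Nvme0n1, single file) are assumptions for illustration, while the attach parameters and the LD_PRELOAD/--spdk_json_conf usage are taken from the trace.

    #!/usr/bin/env bash
    # Sketch: run fio against one NVMe/TCP namespace through the SPDK bdev plugin.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # JSON config for the plugin; one attach_controller entry per target subsystem.
    # The outer "subsystems" wrapper is the standard SPDK JSON config layout (assumed here).
    cat > bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF

    # Hypothetical job file mirroring the traced workload (randread, 4k, QD16).
    cat > job.fio <<'EOF'
    [global]
    ioengine=spdk_bdev
    thread=1
    rw=randread
    bs=4k
    iodepth=16

    [filename0]
    filename=Nvme0n1
    EOF

    # The plugin is preloaded exactly as in the trace.
    LD_PRELOAD=$SPDK/build/fio/spdk_bdev \
        fio --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio

In the trace the config and job file are never written to disk; both are generated on the fly and handed to fio as /dev/fd/62 and /dev/fd/61.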
00:36:17.056 09:46:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:17.056 09:46:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.056 09:46:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:17.056 09:46:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:36:17.056 09:46:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:36:17.056 09:46:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:36:17.056 09:46:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.056 09:46:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:17.056 09:46:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.056 09:46:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:36:17.056 09:46:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.056 09:46:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:17.056 09:46:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:17.057 bdev_null0 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.057 09:46:00 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:17.057 [2024-07-14 09:46:00.290570] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:17.057 bdev_null1 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:17.057 09:46:00 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:17.057 { 00:36:17.057 "params": { 00:36:17.057 "name": "Nvme$subsystem", 00:36:17.057 "trtype": "$TEST_TRANSPORT", 00:36:17.057 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:17.057 "adrfam": "ipv4", 00:36:17.057 "trsvcid": "$NVMF_PORT", 00:36:17.057 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:17.057 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:17.057 "hdgst": ${hdgst:-false}, 00:36:17.057 "ddgst": ${ddgst:-false} 00:36:17.057 }, 00:36:17.057 "method": "bdev_nvme_attach_controller" 00:36:17.057 } 00:36:17.057 EOF 00:36:17.057 )") 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:17.057 { 00:36:17.057 "params": { 00:36:17.057 "name": "Nvme$subsystem", 00:36:17.057 "trtype": "$TEST_TRANSPORT", 00:36:17.057 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:17.057 "adrfam": "ipv4", 00:36:17.057 "trsvcid": "$NVMF_PORT", 00:36:17.057 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:17.057 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:17.057 "hdgst": ${hdgst:-false}, 00:36:17.057 "ddgst": ${ddgst:-false} 00:36:17.057 }, 00:36:17.057 "method": 
"bdev_nvme_attach_controller" 00:36:17.057 } 00:36:17.057 EOF 00:36:17.057 )") 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:17.057 "params": { 00:36:17.057 "name": "Nvme0", 00:36:17.057 "trtype": "tcp", 00:36:17.057 "traddr": "10.0.0.2", 00:36:17.057 "adrfam": "ipv4", 00:36:17.057 "trsvcid": "4420", 00:36:17.057 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:17.057 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:17.057 "hdgst": false, 00:36:17.057 "ddgst": false 00:36:17.057 }, 00:36:17.057 "method": "bdev_nvme_attach_controller" 00:36:17.057 },{ 00:36:17.057 "params": { 00:36:17.057 "name": "Nvme1", 00:36:17.057 "trtype": "tcp", 00:36:17.057 "traddr": "10.0.0.2", 00:36:17.057 "adrfam": "ipv4", 00:36:17.057 "trsvcid": "4420", 00:36:17.057 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:17.057 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:17.057 "hdgst": false, 00:36:17.057 "ddgst": false 00:36:17.057 }, 00:36:17.057 "method": "bdev_nvme_attach_controller" 00:36:17.057 }' 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:17.057 09:46:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:17.057 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:17.057 ... 00:36:17.057 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:17.057 ... 
00:36:17.057 fio-3.35 00:36:17.057 Starting 4 threads 00:36:17.057 EAL: No free 2048 kB hugepages reported on node 1 00:36:22.403 00:36:22.403 filename0: (groupid=0, jobs=1): err= 0: pid=913333: Sun Jul 14 09:46:06 2024 00:36:22.403 read: IOPS=1687, BW=13.2MiB/s (13.8MB/s)(66.0MiB/5003msec) 00:36:22.403 slat (nsec): min=5770, max=63922, avg=11562.82, stdev=5394.79 00:36:22.403 clat (usec): min=2440, max=44306, avg=4702.87, stdev=1340.99 00:36:22.403 lat (usec): min=2448, max=44325, avg=4714.43, stdev=1340.83 00:36:22.403 clat percentiles (usec): 00:36:22.403 | 1.00th=[ 3326], 5.00th=[ 3851], 10.00th=[ 4113], 20.00th=[ 4293], 00:36:22.403 | 30.00th=[ 4424], 40.00th=[ 4555], 50.00th=[ 4621], 60.00th=[ 4752], 00:36:22.403 | 70.00th=[ 4883], 80.00th=[ 4948], 90.00th=[ 5342], 95.00th=[ 5800], 00:36:22.403 | 99.00th=[ 6456], 99.50th=[ 6652], 99.90th=[ 7439], 99.95th=[44303], 00:36:22.403 | 99.99th=[44303] 00:36:22.403 bw ( KiB/s): min=12160, max=14480, per=25.12%, avg=13501.80, stdev=670.27, samples=10 00:36:22.403 iops : min= 1520, max= 1810, avg=1687.70, stdev=83.80, samples=10 00:36:22.403 lat (msec) : 4=7.41%, 10=92.49%, 50=0.09% 00:36:22.403 cpu : usr=94.68%, sys=4.82%, ctx=23, majf=0, minf=70 00:36:22.403 IO depths : 1=0.3%, 2=2.4%, 4=69.2%, 8=28.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:22.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.403 complete : 0=0.0%, 4=93.0%, 8=7.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.403 issued rwts: total=8445,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:22.403 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:22.403 filename0: (groupid=0, jobs=1): err= 0: pid=913334: Sun Jul 14 09:46:06 2024 00:36:22.403 read: IOPS=1674, BW=13.1MiB/s (13.7MB/s)(65.4MiB/5002msec) 00:36:22.403 slat (nsec): min=6197, max=93098, avg=12309.51, stdev=6042.94 00:36:22.403 clat (usec): min=1972, max=46773, avg=4740.69, stdev=1425.74 00:36:22.403 lat (usec): min=1979, max=46791, avg=4753.00, stdev=1425.52 00:36:22.403 clat percentiles (usec): 00:36:22.403 | 1.00th=[ 3425], 5.00th=[ 3982], 10.00th=[ 4178], 20.00th=[ 4359], 00:36:22.403 | 30.00th=[ 4424], 40.00th=[ 4555], 50.00th=[ 4621], 60.00th=[ 4686], 00:36:22.403 | 70.00th=[ 4883], 80.00th=[ 4948], 90.00th=[ 5342], 95.00th=[ 5866], 00:36:22.403 | 99.00th=[ 6849], 99.50th=[ 7111], 99.90th=[ 7635], 99.95th=[46924], 00:36:22.403 | 99.99th=[46924] 00:36:22.403 bw ( KiB/s): min=12601, max=14208, per=24.91%, avg=13389.70, stdev=442.24, samples=10 00:36:22.403 iops : min= 1575, max= 1776, avg=1673.70, stdev=55.31, samples=10 00:36:22.403 lat (msec) : 2=0.05%, 4=5.40%, 10=94.46%, 50=0.10% 00:36:22.403 cpu : usr=93.82%, sys=5.70%, ctx=10, majf=0, minf=45 00:36:22.403 IO depths : 1=0.1%, 2=2.8%, 4=69.0%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:22.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.403 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.403 issued rwts: total=8375,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:22.403 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:22.403 filename1: (groupid=0, jobs=1): err= 0: pid=913335: Sun Jul 14 09:46:06 2024 00:36:22.403 read: IOPS=1682, BW=13.1MiB/s (13.8MB/s)(65.8MiB/5004msec) 00:36:22.403 slat (nsec): min=5820, max=56928, avg=14491.14, stdev=7088.87 00:36:22.403 clat (usec): min=1434, max=8891, avg=4709.83, stdev=852.08 00:36:22.403 lat (usec): min=1446, max=8901, avg=4724.32, stdev=851.55 00:36:22.403 clat percentiles (usec): 00:36:22.403 | 1.00th=[ 
2507], 5.00th=[ 3654], 10.00th=[ 4015], 20.00th=[ 4228], 00:36:22.403 | 30.00th=[ 4359], 40.00th=[ 4490], 50.00th=[ 4555], 60.00th=[ 4686], 00:36:22.403 | 70.00th=[ 4817], 80.00th=[ 4948], 90.00th=[ 5997], 95.00th=[ 6783], 00:36:22.403 | 99.00th=[ 7373], 99.50th=[ 7504], 99.90th=[ 7832], 99.95th=[ 7898], 00:36:22.404 | 99.99th=[ 8848] 00:36:22.404 bw ( KiB/s): min=12896, max=14320, per=25.04%, avg=13457.60, stdev=483.10, samples=10 00:36:22.404 iops : min= 1612, max= 1790, avg=1682.20, stdev=60.39, samples=10 00:36:22.404 lat (msec) : 2=0.14%, 4=9.54%, 10=90.32% 00:36:22.404 cpu : usr=94.92%, sys=4.60%, ctx=10, majf=0, minf=27 00:36:22.404 IO depths : 1=0.3%, 2=3.3%, 4=68.7%, 8=27.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:22.404 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.404 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.404 issued rwts: total=8419,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:22.404 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:22.404 filename1: (groupid=0, jobs=1): err= 0: pid=913336: Sun Jul 14 09:46:06 2024 00:36:22.404 read: IOPS=1674, BW=13.1MiB/s (13.7MB/s)(65.4MiB/5002msec) 00:36:22.404 slat (nsec): min=5810, max=58765, avg=12277.06, stdev=5877.81 00:36:22.404 clat (usec): min=1724, max=8841, avg=4738.75, stdev=796.67 00:36:22.404 lat (usec): min=1748, max=8849, avg=4751.03, stdev=796.08 00:36:22.404 clat percentiles (usec): 00:36:22.404 | 1.00th=[ 2573], 5.00th=[ 3851], 10.00th=[ 4113], 20.00th=[ 4293], 00:36:22.404 | 30.00th=[ 4424], 40.00th=[ 4555], 50.00th=[ 4621], 60.00th=[ 4686], 00:36:22.404 | 70.00th=[ 4817], 80.00th=[ 4948], 90.00th=[ 5800], 95.00th=[ 6587], 00:36:22.404 | 99.00th=[ 7439], 99.50th=[ 7767], 99.90th=[ 8029], 99.95th=[ 8225], 00:36:22.404 | 99.99th=[ 8848] 00:36:22.404 bw ( KiB/s): min=12624, max=14336, per=24.92%, avg=13395.20, stdev=557.42, samples=10 00:36:22.404 iops : min= 1578, max= 1792, avg=1674.40, stdev=69.68, samples=10 00:36:22.404 lat (msec) : 2=0.21%, 4=7.32%, 10=92.47% 00:36:22.404 cpu : usr=94.38%, sys=5.16%, ctx=14, majf=0, minf=46 00:36:22.404 IO depths : 1=0.1%, 2=2.0%, 4=70.8%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:22.404 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.404 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.404 issued rwts: total=8377,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:22.404 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:22.404 00:36:22.404 Run status group 0 (all jobs): 00:36:22.404 READ: bw=52.5MiB/s (55.0MB/s), 13.1MiB/s-13.2MiB/s (13.7MB/s-13.8MB/s), io=263MiB (275MB), run=5002-5004msec 00:36:22.404 09:46:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:36:22.404 09:46:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:22.404 09:46:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:22.404 09:46:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:22.404 09:46:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:22.404 09:46:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:22.404 09:46:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.404 09:46:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:22.404 09:46:06 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.404 09:46:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:22.404 09:46:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.404 09:46:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:22.404 09:46:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.404 09:46:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:22.404 09:46:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:22.404 09:46:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:22.404 09:46:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:22.404 09:46:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.404 09:46:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:22.404 09:46:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.404 09:46:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:22.404 09:46:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.404 09:46:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:22.404 09:46:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.404 00:36:22.404 real 0m24.142s 00:36:22.404 user 4m28.523s 00:36:22.404 sys 0m8.303s 00:36:22.404 09:46:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:22.404 09:46:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:22.404 ************************************ 00:36:22.404 END TEST fio_dif_rand_params 00:36:22.404 ************************************ 00:36:22.404 09:46:06 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:36:22.404 09:46:06 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:36:22.404 09:46:06 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:22.404 09:46:06 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:22.404 09:46:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:22.404 ************************************ 00:36:22.404 START TEST fio_dif_digest 00:36:22.404 ************************************ 00:36:22.404 09:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:36:22.404 09:46:06 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:36:22.404 09:46:06 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:36:22.404 09:46:06 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:36:22.404 09:46:06 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:36:22.404 09:46:06 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:36:22.404 09:46:06 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:36:22.404 09:46:06 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:36:22.404 09:46:06 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:36:22.404 09:46:06 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:36:22.404 09:46:06 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # 
ddgst=true 00:36:22.404 09:46:06 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:36:22.404 09:46:06 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:36:22.404 09:46:06 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:36:22.404 09:46:06 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:36:22.404 09:46:06 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:36:22.404 09:46:06 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:22.404 09:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.404 09:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:22.404 bdev_null0 00:36:22.404 09:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.404 09:46:06 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:22.404 09:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.404 09:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:22.404 09:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.404 09:46:06 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:22.404 09:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.404 09:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:22.404 09:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.404 09:46:06 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:22.404 09:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.404 09:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:22.404 [2024-07-14 09:46:06.580529] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:22.404 09:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.404 09:46:06 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:36:22.404 09:46:06 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:36:22.404 09:46:06 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:22.404 09:46:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:36:22.404 09:46:06 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:22.404 09:46:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:36:22.405 09:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:22.405 09:46:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:22.405 09:46:06 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:36:22.405 09:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:22.405 09:46:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # 
config+=("$(cat <<-EOF 00:36:22.405 { 00:36:22.405 "params": { 00:36:22.405 "name": "Nvme$subsystem", 00:36:22.405 "trtype": "$TEST_TRANSPORT", 00:36:22.405 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:22.405 "adrfam": "ipv4", 00:36:22.405 "trsvcid": "$NVMF_PORT", 00:36:22.405 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:22.405 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:22.405 "hdgst": ${hdgst:-false}, 00:36:22.405 "ddgst": ${ddgst:-false} 00:36:22.405 }, 00:36:22.405 "method": "bdev_nvme_attach_controller" 00:36:22.405 } 00:36:22.405 EOF 00:36:22.405 )") 00:36:22.405 09:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:22.405 09:46:06 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:36:22.405 09:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:22.405 09:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:22.405 09:46:06 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:36:22.405 09:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:36:22.405 09:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:22.405 09:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:22.405 09:46:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:36:22.405 09:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:22.405 09:46:06 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:36:22.405 09:46:06 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:36:22.405 09:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:36:22.405 09:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:22.405 09:46:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:36:22.405 09:46:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:36:22.405 09:46:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:22.405 "params": { 00:36:22.405 "name": "Nvme0", 00:36:22.405 "trtype": "tcp", 00:36:22.405 "traddr": "10.0.0.2", 00:36:22.405 "adrfam": "ipv4", 00:36:22.405 "trsvcid": "4420", 00:36:22.405 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:22.405 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:22.405 "hdgst": true, 00:36:22.405 "ddgst": true 00:36:22.405 }, 00:36:22.405 "method": "bdev_nvme_attach_controller" 00:36:22.405 }' 00:36:22.405 09:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:22.405 09:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:22.405 09:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:22.405 09:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:22.405 09:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:36:22.405 09:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:22.405 09:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:22.405 09:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:22.405 09:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:22.405 09:46:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:22.692 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:22.692 ... 
00:36:22.692 fio-3.35 00:36:22.692 Starting 3 threads 00:36:22.692 EAL: No free 2048 kB hugepages reported on node 1 00:36:34.904 00:36:34.904 filename0: (groupid=0, jobs=1): err= 0: pid=914086: Sun Jul 14 09:46:17 2024 00:36:34.904 read: IOPS=206, BW=25.8MiB/s (27.1MB/s)(259MiB/10049msec) 00:36:34.904 slat (nsec): min=7726, max=84236, avg=13443.40, stdev=3599.88 00:36:34.904 clat (usec): min=6087, max=58587, avg=14498.33, stdev=6773.79 00:36:34.904 lat (usec): min=6099, max=58601, avg=14511.77, stdev=6773.86 00:36:34.904 clat percentiles (usec): 00:36:34.904 | 1.00th=[ 7308], 5.00th=[ 7898], 10.00th=[ 8979], 20.00th=[10290], 00:36:34.904 | 30.00th=[11731], 40.00th=[13829], 50.00th=[14746], 60.00th=[15401], 00:36:34.904 | 70.00th=[15926], 80.00th=[16319], 90.00th=[16909], 95.00th=[17695], 00:36:34.904 | 99.00th=[54789], 99.50th=[56361], 99.90th=[57410], 99.95th=[57410], 00:36:34.904 | 99.99th=[58459] 00:36:34.904 bw ( KiB/s): min=22016, max=31232, per=40.89%, avg=26511.35, stdev=2626.22, samples=20 00:36:34.904 iops : min= 172, max= 244, avg=207.10, stdev=20.52, samples=20 00:36:34.904 lat (msec) : 10=18.13%, 20=79.56%, 50=0.24%, 100=2.07% 00:36:34.904 cpu : usr=90.27%, sys=9.14%, ctx=19, majf=0, minf=202 00:36:34.904 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:34.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:34.904 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:34.904 issued rwts: total=2074,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:34.904 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:34.904 filename0: (groupid=0, jobs=1): err= 0: pid=914087: Sun Jul 14 09:46:17 2024 00:36:34.904 read: IOPS=148, BW=18.6MiB/s (19.5MB/s)(187MiB/10052msec) 00:36:34.904 slat (nsec): min=5014, max=42960, avg=13347.50, stdev=3413.07 00:36:34.904 clat (usec): min=7634, max=60023, avg=20095.43, stdev=12482.06 00:36:34.904 lat (usec): min=7646, max=60035, avg=20108.78, stdev=12482.26 00:36:34.904 clat percentiles (usec): 00:36:34.904 | 1.00th=[ 9372], 5.00th=[12125], 10.00th=[13042], 20.00th=[15008], 00:36:34.904 | 30.00th=[15664], 40.00th=[16057], 50.00th=[16581], 60.00th=[16909], 00:36:34.904 | 70.00th=[17171], 80.00th=[17957], 90.00th=[52691], 95.00th=[56886], 00:36:34.904 | 99.00th=[58983], 99.50th=[59507], 99.90th=[59507], 99.95th=[60031], 00:36:34.904 | 99.99th=[60031] 00:36:34.904 bw ( KiB/s): min=14080, max=24320, per=29.50%, avg=19121.40, stdev=2664.20, samples=20 00:36:34.904 iops : min= 110, max= 190, avg=149.35, stdev=20.84, samples=20 00:36:34.904 lat (msec) : 10=1.07%, 20=88.51%, 50=0.07%, 100=10.35% 00:36:34.904 cpu : usr=91.38%, sys=8.12%, ctx=27, majf=0, minf=119 00:36:34.904 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:34.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:34.905 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:34.905 issued rwts: total=1497,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:34.905 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:34.905 filename0: (groupid=0, jobs=1): err= 0: pid=914088: Sun Jul 14 09:46:17 2024 00:36:34.905 read: IOPS=151, BW=19.0MiB/s (19.9MB/s)(190MiB/10008msec) 00:36:34.905 slat (nsec): min=7242, max=39571, avg=13514.26, stdev=3040.64 00:36:34.905 clat (usec): min=6516, max=99067, avg=19734.47, stdev=10733.69 00:36:34.905 lat (usec): min=6529, max=99080, avg=19747.99, stdev=10733.88 00:36:34.905 clat percentiles (usec): 
00:36:34.905 | 1.00th=[ 7898], 5.00th=[11731], 10.00th=[13566], 20.00th=[15401], 00:36:34.905 | 30.00th=[16450], 40.00th=[17171], 50.00th=[17695], 60.00th=[18220], 00:36:34.905 | 70.00th=[19006], 80.00th=[19530], 90.00th=[20841], 95.00th=[55837], 00:36:34.905 | 99.00th=[59507], 99.50th=[62129], 99.90th=[99091], 99.95th=[99091], 00:36:34.905 | 99.99th=[99091] 00:36:34.905 bw ( KiB/s): min=15872, max=22272, per=29.95%, avg=19417.60, stdev=1676.19, samples=20 00:36:34.905 iops : min= 124, max= 174, avg=151.70, stdev=13.10, samples=20 00:36:34.905 lat (msec) : 10=2.43%, 20=82.89%, 50=8.22%, 100=6.45% 00:36:34.905 cpu : usr=91.21%, sys=8.32%, ctx=14, majf=0, minf=110 00:36:34.905 IO depths : 1=1.0%, 2=99.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:34.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:34.905 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:34.905 issued rwts: total=1520,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:34.905 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:34.905 00:36:34.905 Run status group 0 (all jobs): 00:36:34.905 READ: bw=63.3MiB/s (66.4MB/s), 18.6MiB/s-25.8MiB/s (19.5MB/s-27.1MB/s), io=636MiB (667MB), run=10008-10052msec 00:36:34.905 09:46:17 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:36:34.905 09:46:17 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:36:34.905 09:46:17 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:36:34.905 09:46:17 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:34.905 09:46:17 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:36:34.905 09:46:17 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:34.905 09:46:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:34.905 09:46:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:34.905 09:46:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:34.905 09:46:17 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:34.905 09:46:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:34.905 09:46:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:34.905 09:46:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:34.905 00:36:34.905 real 0m11.107s 00:36:34.905 user 0m28.477s 00:36:34.905 sys 0m2.845s 00:36:34.905 09:46:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:34.905 09:46:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:34.905 ************************************ 00:36:34.905 END TEST fio_dif_digest 00:36:34.905 ************************************ 00:36:34.905 09:46:17 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:36:34.905 09:46:17 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:36:34.905 09:46:17 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:36:34.905 09:46:17 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:34.905 09:46:17 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:36:34.905 09:46:17 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:34.905 09:46:17 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:36:34.905 09:46:17 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:34.905 09:46:17 nvmf_dif -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:34.905 rmmod nvme_tcp 00:36:34.905 rmmod nvme_fabrics 00:36:34.905 rmmod nvme_keyring 00:36:34.905 09:46:17 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:34.905 09:46:17 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:36:34.905 09:46:17 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:36:34.905 09:46:17 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 908160 ']' 00:36:34.905 09:46:17 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 908160 00:36:34.905 09:46:17 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 908160 ']' 00:36:34.905 09:46:17 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 908160 00:36:34.905 09:46:17 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:36:34.905 09:46:17 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:34.905 09:46:17 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 908160 00:36:34.905 09:46:17 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:34.905 09:46:17 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:34.905 09:46:17 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 908160' 00:36:34.905 killing process with pid 908160 00:36:34.905 09:46:17 nvmf_dif -- common/autotest_common.sh@967 -- # kill 908160 00:36:34.905 09:46:17 nvmf_dif -- common/autotest_common.sh@972 -- # wait 908160 00:36:34.905 09:46:17 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:36:34.905 09:46:17 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:34.905 Waiting for block devices as requested 00:36:34.905 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:34.905 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:34.905 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:35.162 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:35.162 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:35.162 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:35.162 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:35.420 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:35.420 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:35.420 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:35.420 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:35.677 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:35.677 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:35.677 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:35.677 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:35.934 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:35.934 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:35.934 09:46:20 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:35.934 09:46:20 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:35.934 09:46:20 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:35.934 09:46:20 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:35.934 09:46:20 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:35.934 09:46:20 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:35.934 09:46:20 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:38.465 09:46:22 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:38.465 00:36:38.465 real 1m6.297s 00:36:38.465 user 6m23.334s 00:36:38.465 sys 0m20.795s 00:36:38.465 09:46:22 nvmf_dif -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:36:38.465 09:46:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:38.465 ************************************ 00:36:38.465 END TEST nvmf_dif 00:36:38.465 ************************************ 00:36:38.465 09:46:22 -- common/autotest_common.sh@1142 -- # return 0 00:36:38.465 09:46:22 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:38.465 09:46:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:38.465 09:46:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:38.465 09:46:22 -- common/autotest_common.sh@10 -- # set +x 00:36:38.465 ************************************ 00:36:38.465 START TEST nvmf_abort_qd_sizes 00:36:38.465 ************************************ 00:36:38.465 09:46:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:38.465 * Looking for test storage... 00:36:38.465 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:38.465 09:46:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:38.465 09:46:22 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:36:38.465 09:46:22 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:38.465 09:46:22 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:38.465 09:46:22 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:38.465 09:46:22 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:38.465 09:46:22 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:38.465 09:46:22 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:38.465 09:46:22 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:38.465 09:46:22 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:38.465 09:46:22 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:38.465 09:46:22 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:38.465 09:46:22 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:38.465 09:46:22 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:38.465 09:46:22 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:38.465 09:46:22 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:38.465 09:46:22 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:38.465 09:46:22 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:38.465 09:46:22 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:38.465 09:46:22 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:38.465 09:46:22 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:38.465 09:46:22 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:38.465 09:46:22 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:38.465 09:46:22 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:38.465 09:46:22 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:38.465 09:46:22 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:36:38.465 09:46:22 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:38.465 09:46:22 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:36:38.465 09:46:22 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:38.465 09:46:22 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:38.465 09:46:22 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:38.465 09:46:22 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:38.465 09:46:22 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:38.465 09:46:22 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:38.465 09:46:22 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:38.465 09:46:22 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:38.465 09:46:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:36:38.465 09:46:22 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:38.465 09:46:22 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:38.465 09:46:22 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:38.465 09:46:22 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:38.465 09:46:22 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:38.465 09:46:22 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:38.465 09:46:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:38.465 09:46:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:38.465 09:46:22 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:38.465 09:46:22 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:38.465 09:46:22 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:36:38.465 09:46:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:40.366 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:40.366 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:40.366 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:40.366 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:40.366 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:40.366 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:36:40.366 00:36:40.366 --- 10.0.0.2 ping statistics --- 00:36:40.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:40.366 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:40.366 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:40.366 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:36:40.366 00:36:40.366 --- 10.0.0.1 ping statistics --- 00:36:40.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:40.366 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:36:40.366 09:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:41.302 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:41.302 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:41.302 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:41.302 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:41.302 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:41.302 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:41.302 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:41.302 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:41.302 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:41.302 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:41.302 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:41.302 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:41.302 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:41.302 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:41.302 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:41.302 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:42.237 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:36:42.496 09:46:26 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:42.496 09:46:26 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:42.496 09:46:26 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:42.496 09:46:26 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:42.496 09:46:26 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:42.496 09:46:26 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:42.496 09:46:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:36:42.496 09:46:26 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:42.496 09:46:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:42.496 09:46:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:42.496 09:46:26 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=918935 00:36:42.496 09:46:26 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:36:42.496 09:46:26 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 918935 00:36:42.496 09:46:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 918935 ']' 00:36:42.496 09:46:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:42.496 09:46:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:42.496 09:46:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:42.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:42.496 09:46:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:42.496 09:46:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:42.496 [2024-07-14 09:46:26.813742] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:36:42.496 [2024-07-14 09:46:26.813817] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:42.496 EAL: No free 2048 kB hugepages reported on node 1 00:36:42.496 [2024-07-14 09:46:26.884700] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:42.755 [2024-07-14 09:46:26.985783] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:42.755 [2024-07-14 09:46:26.985839] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:42.755 [2024-07-14 09:46:26.985875] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:42.755 [2024-07-14 09:46:26.985888] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:42.755 [2024-07-14 09:46:26.985898] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:42.755 [2024-07-14 09:46:26.985981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:42.755 [2024-07-14 09:46:26.986005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:36:42.755 [2024-07-14 09:46:26.986028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:36:42.755 [2024-07-14 09:46:26.986031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:42.755 09:46:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:42.755 09:46:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:36:42.755 09:46:27 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:42.755 09:46:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:42.755 09:46:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:42.755 09:46:27 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:42.755 09:46:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:36:42.755 09:46:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:36:42.755 09:46:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:36:42.755 09:46:27 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:36:42.755 09:46:27 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:36:42.755 09:46:27 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:36:42.755 09:46:27 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:36:42.755 09:46:27 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:36:42.755 09:46:27 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:36:42.755 09:46:27 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:36:42.755 09:46:27 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:36:42.755 09:46:27 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:36:42.755 09:46:27 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:36:42.755 09:46:27 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:36:42.755 09:46:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:36:42.755 09:46:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:36:42.755 09:46:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:36:42.755 09:46:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:42.755 09:46:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:42.755 09:46:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:42.755 ************************************ 00:36:42.755 START TEST spdk_target_abort 00:36:42.755 ************************************ 00:36:42.755 09:46:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:36:42.755 09:46:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:36:42.755 09:46:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:36:42.755 09:46:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:42.755 09:46:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:46.036 spdk_targetn1 00:36:46.036 09:46:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:46.036 09:46:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:46.036 09:46:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:46.036 09:46:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:46.036 [2024-07-14 09:46:30.020958] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:46.036 09:46:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:46.036 09:46:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:36:46.036 09:46:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:46.036 09:46:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:46.036 09:46:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:46.036 09:46:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:36:46.036 09:46:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:46.036 09:46:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:46.036 09:46:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:46.036 09:46:30 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:36:46.036 09:46:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:46.036 09:46:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:46.036 [2024-07-14 09:46:30.053263] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:46.036 09:46:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:46.036 09:46:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:36:46.036 09:46:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:46.036 09:46:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:46.036 09:46:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:36:46.036 09:46:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:46.036 09:46:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:46.036 09:46:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:46.036 09:46:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:46.036 09:46:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:46.036 09:46:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:46.036 09:46:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:46.036 09:46:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:46.036 09:46:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:46.036 09:46:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:46.036 09:46:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:36:46.036 09:46:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:46.036 09:46:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:46.036 09:46:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:46.036 09:46:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:46.036 09:46:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:46.036 09:46:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:46.036 EAL: No free 2048 kB hugepages 
reported on node 1 00:36:49.316 Initializing NVMe Controllers 00:36:49.316 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:49.316 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:49.316 Initialization complete. Launching workers. 00:36:49.316 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 7173, failed: 0 00:36:49.316 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1600, failed to submit 5573 00:36:49.316 success 696, unsuccess 904, failed 0 00:36:49.316 09:46:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:49.316 09:46:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:49.316 EAL: No free 2048 kB hugepages reported on node 1 00:36:52.594 Initializing NVMe Controllers 00:36:52.594 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:52.594 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:52.594 Initialization complete. Launching workers. 00:36:52.594 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8600, failed: 0 00:36:52.594 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1216, failed to submit 7384 00:36:52.594 success 342, unsuccess 874, failed 0 00:36:52.594 09:46:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:52.594 09:46:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:52.594 EAL: No free 2048 kB hugepages reported on node 1 00:36:55.874 Initializing NVMe Controllers 00:36:55.874 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:55.874 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:55.874 Initialization complete. Launching workers. 
00:36:55.874 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31248, failed: 0 00:36:55.874 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2739, failed to submit 28509 00:36:55.874 success 534, unsuccess 2205, failed 0 00:36:55.874 09:46:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:36:55.874 09:46:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:55.874 09:46:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:55.874 09:46:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:55.874 09:46:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:36:55.874 09:46:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:55.874 09:46:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:56.803 09:46:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:56.803 09:46:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 918935 00:36:56.803 09:46:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 918935 ']' 00:36:56.803 09:46:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 918935 00:36:56.803 09:46:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:36:56.803 09:46:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:56.803 09:46:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 918935 00:36:56.803 09:46:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:56.803 09:46:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:56.803 09:46:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 918935' 00:36:56.803 killing process with pid 918935 00:36:56.803 09:46:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 918935 00:36:56.803 09:46:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 918935 00:36:57.099 00:36:57.099 real 0m14.246s 00:36:57.099 user 0m53.135s 00:36:57.099 sys 0m2.899s 00:36:57.099 09:46:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:57.099 09:46:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:57.099 ************************************ 00:36:57.099 END TEST spdk_target_abort 00:36:57.099 ************************************ 00:36:57.099 09:46:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:36:57.099 09:46:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:36:57.099 09:46:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:57.099 09:46:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:57.099 09:46:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:57.099 
************************************ 00:36:57.099 START TEST kernel_target_abort 00:36:57.099 ************************************ 00:36:57.099 09:46:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:36:57.099 09:46:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:36:57.099 09:46:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:36:57.099 09:46:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:57.099 09:46:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:57.099 09:46:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:57.099 09:46:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:57.099 09:46:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:57.099 09:46:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:57.099 09:46:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:57.099 09:46:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:57.099 09:46:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:57.099 09:46:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:57.099 09:46:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:36:57.099 09:46:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:36:57.099 09:46:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:57.099 09:46:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:57.099 09:46:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:57.099 09:46:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:36:57.099 09:46:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:36:57.099 09:46:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:36:57.099 09:46:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:57.099 09:46:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:58.031 Waiting for block devices as requested 00:36:58.290 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:58.290 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:58.548 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:58.548 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:58.548 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:58.548 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:58.806 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:58.806 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:58.806 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:58.806 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:59.064 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:59.064 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:59.064 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:59.322 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:59.322 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:59.322 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:59.322 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:59.580 09:46:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:36:59.580 09:46:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:59.580 09:46:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:36:59.580 09:46:43 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:36:59.580 09:46:43 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:59.580 09:46:43 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:36:59.580 09:46:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:36:59.580 09:46:43 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:36:59.580 09:46:43 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:59.580 No valid GPT data, bailing 00:36:59.580 09:46:43 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:59.580 09:46:43 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:36:59.580 09:46:43 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:36:59.580 09:46:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:36:59.580 09:46:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:36:59.580 09:46:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:59.580 09:46:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:59.580 09:46:43 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:59.580 09:46:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:59.580 09:46:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:36:59.580 09:46:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:36:59.580 09:46:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:36:59.580 09:46:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:36:59.580 09:46:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:36:59.580 09:46:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:36:59.580 09:46:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:36:59.580 09:46:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:59.581 09:46:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:36:59.581 00:36:59.581 Discovery Log Number of Records 2, Generation counter 2 00:36:59.581 =====Discovery Log Entry 0====== 00:36:59.581 trtype: tcp 00:36:59.581 adrfam: ipv4 00:36:59.581 subtype: current discovery subsystem 00:36:59.581 treq: not specified, sq flow control disable supported 00:36:59.581 portid: 1 00:36:59.581 trsvcid: 4420 00:36:59.581 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:59.581 traddr: 10.0.0.1 00:36:59.581 eflags: none 00:36:59.581 sectype: none 00:36:59.581 =====Discovery Log Entry 1====== 00:36:59.581 trtype: tcp 00:36:59.581 adrfam: ipv4 00:36:59.581 subtype: nvme subsystem 00:36:59.581 treq: not specified, sq flow control disable supported 00:36:59.581 portid: 1 00:36:59.581 trsvcid: 4420 00:36:59.581 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:59.581 traddr: 10.0.0.1 00:36:59.581 eflags: none 00:36:59.581 sectype: none 00:36:59.581 09:46:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:36:59.581 09:46:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:59.581 09:46:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:59.581 09:46:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:36:59.581 09:46:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:59.581 09:46:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:59.581 09:46:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:59.581 09:46:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:59.581 09:46:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:59.581 09:46:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:59.581 09:46:43 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:59.581 09:46:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:59.581 09:46:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:59.581 09:46:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:59.581 09:46:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:36:59.581 09:46:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:59.581 09:46:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:36:59.581 09:46:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:59.581 09:46:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:59.581 09:46:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:59.581 09:46:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:59.581 EAL: No free 2048 kB hugepages reported on node 1 00:37:02.891 Initializing NVMe Controllers 00:37:02.891 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:02.891 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:02.891 Initialization complete. Launching workers. 00:37:02.891 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 27569, failed: 0 00:37:02.891 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 27569, failed to submit 0 00:37:02.891 success 0, unsuccess 27569, failed 0 00:37:02.891 09:46:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:02.891 09:46:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:02.891 EAL: No free 2048 kB hugepages reported on node 1 00:37:06.170 Initializing NVMe Controllers 00:37:06.170 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:06.170 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:06.170 Initialization complete. Launching workers. 
00:37:06.170 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 54734, failed: 0 00:37:06.170 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 13778, failed to submit 40956 00:37:06.170 success 0, unsuccess 13778, failed 0 00:37:06.170 09:46:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:06.170 09:46:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:06.170 EAL: No free 2048 kB hugepages reported on node 1 00:37:09.487 Initializing NVMe Controllers 00:37:09.487 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:09.487 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:09.487 Initialization complete. Launching workers. 00:37:09.487 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 53910, failed: 0 00:37:09.487 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 13438, failed to submit 40472 00:37:09.487 success 0, unsuccess 13438, failed 0 00:37:09.487 09:46:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:37:09.487 09:46:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:37:09.487 09:46:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:37:09.487 09:46:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:09.487 09:46:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:09.487 09:46:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:37:09.487 09:46:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:09.487 09:46:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:37:09.487 09:46:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:37:09.487 09:46:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:10.053 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:37:10.053 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:37:10.053 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:37:10.053 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:37:10.053 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:37:10.053 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:37:10.053 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:37:10.053 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:37:10.311 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:37:10.311 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:37:10.311 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:37:10.311 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:37:10.311 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:37:10.311 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:37:10.311 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:37:10.311 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:37:11.247 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:37:11.247 00:37:11.247 real 0m14.169s 00:37:11.247 user 0m4.467s 00:37:11.247 sys 0m3.327s 00:37:11.247 09:46:55 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:11.247 09:46:55 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:11.247 ************************************ 00:37:11.247 END TEST kernel_target_abort 00:37:11.247 ************************************ 00:37:11.247 09:46:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:37:11.247 09:46:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:37:11.247 09:46:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:37:11.247 09:46:55 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:11.247 09:46:55 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:37:11.247 09:46:55 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:11.247 09:46:55 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:37:11.247 09:46:55 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:11.247 09:46:55 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:11.247 rmmod nvme_tcp 00:37:11.505 rmmod nvme_fabrics 00:37:11.505 rmmod nvme_keyring 00:37:11.505 09:46:55 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:11.505 09:46:55 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:37:11.505 09:46:55 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:37:11.505 09:46:55 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 918935 ']' 00:37:11.505 09:46:55 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 918935 00:37:11.505 09:46:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 918935 ']' 00:37:11.505 09:46:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 918935 00:37:11.505 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (918935) - No such process 00:37:11.505 09:46:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 918935 is not found' 00:37:11.505 Process with pid 918935 is not found 00:37:11.505 09:46:55 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:37:11.505 09:46:55 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:12.440 Waiting for block devices as requested 00:37:12.440 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:37:12.699 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:37:12.699 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:37:12.699 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:37:12.957 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:37:12.957 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:37:12.957 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:37:12.957 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:37:13.216 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:37:13.216 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:37:13.216 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:37:13.216 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:37:13.475 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:37:13.475 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 
00:37:13.475 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:37:13.475 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:37:13.733 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:37:13.733 09:46:58 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:13.733 09:46:58 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:13.733 09:46:58 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:13.733 09:46:58 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:13.733 09:46:58 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:13.733 09:46:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:13.733 09:46:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:16.264 09:47:00 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:16.264 00:37:16.264 real 0m37.691s 00:37:16.264 user 0m59.630s 00:37:16.264 sys 0m9.474s 00:37:16.264 09:47:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:16.264 09:47:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:16.264 ************************************ 00:37:16.264 END TEST nvmf_abort_qd_sizes 00:37:16.264 ************************************ 00:37:16.264 09:47:00 -- common/autotest_common.sh@1142 -- # return 0 00:37:16.264 09:47:00 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:16.264 09:47:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:16.264 09:47:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:16.264 09:47:00 -- common/autotest_common.sh@10 -- # set +x 00:37:16.264 ************************************ 00:37:16.264 START TEST keyring_file 00:37:16.264 ************************************ 00:37:16.264 09:47:00 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:16.264 * Looking for test storage... 
00:37:16.264 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:16.264 09:47:00 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:16.264 09:47:00 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:16.264 09:47:00 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:37:16.264 09:47:00 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:16.264 09:47:00 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:16.264 09:47:00 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:16.264 09:47:00 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:16.264 09:47:00 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:16.264 09:47:00 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:16.264 09:47:00 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:16.264 09:47:00 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:16.264 09:47:00 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:16.264 09:47:00 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:16.264 09:47:00 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:16.264 09:47:00 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:16.264 09:47:00 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:16.264 09:47:00 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:16.264 09:47:00 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:16.264 09:47:00 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:16.264 09:47:00 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:16.264 09:47:00 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:16.264 09:47:00 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:16.264 09:47:00 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:16.264 09:47:00 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:16.264 09:47:00 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:16.264 09:47:00 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:16.264 09:47:00 keyring_file -- paths/export.sh@5 -- # export PATH 00:37:16.264 09:47:00 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:16.264 09:47:00 keyring_file -- nvmf/common.sh@47 -- # : 0 00:37:16.264 09:47:00 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:16.264 09:47:00 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:16.264 09:47:00 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:16.264 09:47:00 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:16.264 09:47:00 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:16.264 09:47:00 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:16.264 09:47:00 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:16.264 09:47:00 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:16.264 09:47:00 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:16.264 09:47:00 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:16.264 09:47:00 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:16.264 09:47:00 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:37:16.264 09:47:00 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:37:16.264 09:47:00 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:37:16.264 09:47:00 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:16.264 09:47:00 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:16.264 09:47:00 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:16.264 09:47:00 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:16.264 09:47:00 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:16.264 09:47:00 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:16.264 09:47:00 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.IwoGvJyGFC 00:37:16.265 09:47:00 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:16.265 09:47:00 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:16.265 09:47:00 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:37:16.265 09:47:00 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:16.265 09:47:00 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:37:16.265 09:47:00 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:37:16.265 09:47:00 keyring_file -- nvmf/common.sh@705 -- # python - 00:37:16.265 09:47:00 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.IwoGvJyGFC 00:37:16.265 09:47:00 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.IwoGvJyGFC 00:37:16.265 09:47:00 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.IwoGvJyGFC 00:37:16.265 09:47:00 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:37:16.265 09:47:00 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:16.265 09:47:00 keyring_file -- keyring/common.sh@17 -- # name=key1 00:37:16.265 09:47:00 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:16.265 09:47:00 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:16.265 09:47:00 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:16.265 09:47:00 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ajnDJazEpN 00:37:16.265 09:47:00 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:16.265 09:47:00 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:16.265 09:47:00 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:37:16.265 09:47:00 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:16.265 09:47:00 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:37:16.265 09:47:00 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:37:16.265 09:47:00 keyring_file -- nvmf/common.sh@705 -- # python - 00:37:16.265 09:47:00 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ajnDJazEpN 00:37:16.265 09:47:00 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ajnDJazEpN 00:37:16.265 09:47:00 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.ajnDJazEpN 00:37:16.265 09:47:00 keyring_file -- keyring/file.sh@30 -- # tgtpid=924733 00:37:16.265 09:47:00 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:16.265 09:47:00 keyring_file -- keyring/file.sh@32 -- # waitforlisten 924733 00:37:16.265 09:47:00 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 924733 ']' 00:37:16.265 09:47:00 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:16.265 09:47:00 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:16.265 09:47:00 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:16.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:16.265 09:47:00 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:16.265 09:47:00 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:16.265 [2024-07-14 09:47:00.420770] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
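Each prep_key call in the trace above follows the same pattern: raw hex key material is wrapped into an NVMeTLSkey-1 interchange PSK with digest 0, written to a mktemp file, and restricted to mode 0600 before it is handed to the keyring. A rough standalone sketch of that flow, assuming it is run from the SPDK repo root with the test helpers sourced (the key value is the one from this run; the temp-file name is whatever mktemp returns, e.g. /tmp/tmp.IwoGvJyGFC here):

    . test/keyring/common.sh                   # also pulls in test/nvmf/common.sh, which defines format_interchange_psk
    key=00112233445566778899aabbccddeeff       # raw hex key material taken from the trace
    path=$(mktemp)                             # temp file that will back the key
    format_interchange_psk "$key" 0 > "$path"  # wrap the hex key as an NVMeTLSkey-1 interchange PSK, digest 0
    chmod 0600 "$path"                         # keyring_file only accepts owner-only key files

The resulting file is registered later in the trace with keyring_file_add_key over the bdevperf RPC socket.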
00:37:16.265 [2024-07-14 09:47:00.420853] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid924733 ] 00:37:16.265 EAL: No free 2048 kB hugepages reported on node 1 00:37:16.265 [2024-07-14 09:47:00.482160] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:16.265 [2024-07-14 09:47:00.577614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:16.523 09:47:00 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:16.523 09:47:00 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:37:16.523 09:47:00 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:37:16.523 09:47:00 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:16.523 09:47:00 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:16.523 [2024-07-14 09:47:00.842878] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:16.523 null0 00:37:16.523 [2024-07-14 09:47:00.874944] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:16.523 [2024-07-14 09:47:00.875443] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:16.523 [2024-07-14 09:47:00.882953] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:37:16.523 09:47:00 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:16.523 09:47:00 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:16.523 09:47:00 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:16.523 09:47:00 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:16.523 09:47:00 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:37:16.523 09:47:00 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:16.523 09:47:00 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:37:16.523 09:47:00 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:16.523 09:47:00 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:16.523 09:47:00 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:16.523 09:47:00 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:16.523 [2024-07-14 09:47:00.890971] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:37:16.523 request: 00:37:16.523 { 00:37:16.523 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:37:16.523 "secure_channel": false, 00:37:16.523 "listen_address": { 00:37:16.523 "trtype": "tcp", 00:37:16.523 "traddr": "127.0.0.1", 00:37:16.523 "trsvcid": "4420" 00:37:16.523 }, 00:37:16.523 "method": "nvmf_subsystem_add_listener", 00:37:16.523 "req_id": 1 00:37:16.523 } 00:37:16.523 Got JSON-RPC error response 00:37:16.523 response: 00:37:16.523 { 00:37:16.523 "code": -32602, 00:37:16.523 "message": "Invalid parameters" 00:37:16.523 } 00:37:16.523 09:47:00 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:37:16.523 09:47:00 keyring_file -- common/autotest_common.sh@651 -- # es=1 
00:37:16.523 09:47:00 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:16.523 09:47:00 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:16.523 09:47:00 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:16.523 09:47:00 keyring_file -- keyring/file.sh@46 -- # bperfpid=924752 00:37:16.523 09:47:00 keyring_file -- keyring/file.sh@48 -- # waitforlisten 924752 /var/tmp/bperf.sock 00:37:16.523 09:47:00 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:37:16.523 09:47:00 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 924752 ']' 00:37:16.523 09:47:00 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:16.523 09:47:00 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:16.523 09:47:00 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:16.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:16.523 09:47:00 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:16.523 09:47:00 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:16.523 [2024-07-14 09:47:00.934402] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:37:16.523 [2024-07-14 09:47:00.934481] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid924752 ] 00:37:16.523 EAL: No free 2048 kB hugepages reported on node 1 00:37:16.781 [2024-07-14 09:47:01.002805] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:16.781 [2024-07-14 09:47:01.096612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:16.781 09:47:01 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:16.781 09:47:01 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:37:16.781 09:47:01 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.IwoGvJyGFC 00:37:16.781 09:47:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.IwoGvJyGFC 00:37:17.039 09:47:01 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.ajnDJazEpN 00:37:17.039 09:47:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.ajnDJazEpN 00:37:17.297 09:47:01 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:37:17.297 09:47:01 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:37:17.297 09:47:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:17.297 09:47:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:17.297 09:47:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:17.555 09:47:01 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.IwoGvJyGFC == \/\t\m\p\/\t\m\p\.\I\w\o\G\v\J\y\G\F\C ]] 00:37:17.555 09:47:01 keyring_file -- keyring/file.sh@52 
-- # get_key key1 00:37:17.555 09:47:01 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:37:17.555 09:47:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:17.555 09:47:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:17.555 09:47:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:17.813 09:47:02 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.ajnDJazEpN == \/\t\m\p\/\t\m\p\.\a\j\n\D\J\a\z\E\p\N ]] 00:37:17.813 09:47:02 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:37:17.813 09:47:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:17.813 09:47:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:17.813 09:47:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:17.813 09:47:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:17.813 09:47:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:18.071 09:47:02 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:37:18.071 09:47:02 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:37:18.071 09:47:02 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:18.071 09:47:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:18.071 09:47:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:18.071 09:47:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:18.071 09:47:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:18.329 09:47:02 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:37:18.329 09:47:02 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:18.329 09:47:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:18.587 [2024-07-14 09:47:02.933288] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:18.587 nvme0n1 00:37:18.587 09:47:03 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:37:18.587 09:47:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:18.587 09:47:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:18.587 09:47:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:18.587 09:47:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:18.587 09:47:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:18.845 09:47:03 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:37:18.845 09:47:03 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:37:18.845 09:47:03 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:18.845 09:47:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:18.845 09:47:03 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:37:18.845 09:47:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:18.845 09:47:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:19.102 09:47:03 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:37:19.102 09:47:03 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:19.359 Running I/O for 1 seconds... 00:37:20.291 00:37:20.291 Latency(us) 00:37:20.291 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:20.291 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:37:20.291 nvme0n1 : 1.03 4149.60 16.21 0.00 0.00 30396.84 7815.77 44079.03 00:37:20.291 =================================================================================================================== 00:37:20.291 Total : 4149.60 16.21 0.00 0.00 30396.84 7815.77 44079.03 00:37:20.291 0 00:37:20.291 09:47:04 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:20.291 09:47:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:20.549 09:47:04 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:37:20.549 09:47:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:20.549 09:47:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:20.549 09:47:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:20.549 09:47:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:20.549 09:47:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:20.806 09:47:05 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:37:20.806 09:47:05 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:37:20.806 09:47:05 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:20.806 09:47:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:20.806 09:47:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:20.806 09:47:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:20.806 09:47:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:21.064 09:47:05 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:37:21.064 09:47:05 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:21.064 09:47:05 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:21.064 09:47:05 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:21.064 09:47:05 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:21.064 09:47:05 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:21.064 09:47:05 keyring_file -- common/autotest_common.sh@640 -- # type -t 
bperf_cmd 00:37:21.064 09:47:05 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:21.065 09:47:05 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:21.065 09:47:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:21.322 [2024-07-14 09:47:05.671329] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:21.322 [2024-07-14 09:47:05.671819] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc61710 (107): Transport endpoint is not connected 00:37:21.322 [2024-07-14 09:47:05.672807] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc61710 (9): Bad file descriptor 00:37:21.322 [2024-07-14 09:47:05.673805] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:21.322 [2024-07-14 09:47:05.673828] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:21.322 [2024-07-14 09:47:05.673852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:21.322 request: 00:37:21.322 { 00:37:21.322 "name": "nvme0", 00:37:21.322 "trtype": "tcp", 00:37:21.322 "traddr": "127.0.0.1", 00:37:21.322 "adrfam": "ipv4", 00:37:21.322 "trsvcid": "4420", 00:37:21.322 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:21.323 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:21.323 "prchk_reftag": false, 00:37:21.323 "prchk_guard": false, 00:37:21.323 "hdgst": false, 00:37:21.323 "ddgst": false, 00:37:21.323 "psk": "key1", 00:37:21.323 "method": "bdev_nvme_attach_controller", 00:37:21.323 "req_id": 1 00:37:21.323 } 00:37:21.323 Got JSON-RPC error response 00:37:21.323 response: 00:37:21.323 { 00:37:21.323 "code": -5, 00:37:21.323 "message": "Input/output error" 00:37:21.323 } 00:37:21.323 09:47:05 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:21.323 09:47:05 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:21.323 09:47:05 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:21.323 09:47:05 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:21.323 09:47:05 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:37:21.323 09:47:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:21.323 09:47:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:21.323 09:47:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:21.323 09:47:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:21.323 09:47:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:21.580 09:47:05 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:37:21.580 09:47:05 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:37:21.580 09:47:05 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:21.580 09:47:05 keyring_file -- keyring/common.sh@12 -- # jq -r 
.refcnt 00:37:21.580 09:47:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:21.580 09:47:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:21.580 09:47:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:21.837 09:47:06 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:37:21.837 09:47:06 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:37:21.837 09:47:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:22.094 09:47:06 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:37:22.094 09:47:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:37:22.352 09:47:06 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:37:22.352 09:47:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:22.352 09:47:06 keyring_file -- keyring/file.sh@77 -- # jq length 00:37:22.610 09:47:06 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:37:22.610 09:47:06 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.IwoGvJyGFC 00:37:22.610 09:47:06 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.IwoGvJyGFC 00:37:22.610 09:47:06 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:22.610 09:47:06 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.IwoGvJyGFC 00:37:22.610 09:47:06 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:22.610 09:47:06 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:22.610 09:47:06 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:22.610 09:47:06 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:22.610 09:47:06 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.IwoGvJyGFC 00:37:22.610 09:47:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.IwoGvJyGFC 00:37:22.874 [2024-07-14 09:47:07.167670] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.IwoGvJyGFC': 0100660 00:37:22.874 [2024-07-14 09:47:07.167717] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:37:22.874 request: 00:37:22.874 { 00:37:22.874 "name": "key0", 00:37:22.874 "path": "/tmp/tmp.IwoGvJyGFC", 00:37:22.874 "method": "keyring_file_add_key", 00:37:22.874 "req_id": 1 00:37:22.874 } 00:37:22.874 Got JSON-RPC error response 00:37:22.874 response: 00:37:22.874 { 00:37:22.874 "code": -1, 00:37:22.874 "message": "Operation not permitted" 00:37:22.874 } 00:37:22.874 09:47:07 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:22.874 09:47:07 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:22.874 09:47:07 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:22.874 09:47:07 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 
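
The failed keyring_file_add_key above is the negative half of a permission check: the file-based keyring rejects key files whose mode grants group or other access, and the trace that follows retries after tightening the mode to 0600. A minimal sketch of the same sequence, assuming rpc.py is run from the SPDK tree against the bperf socket used in this run:

# Minimal sketch of the permission check: the file-based keyring refuses key
# files readable by group/other, so the add only succeeds with mode 0600.
SOCK=/var/tmp/bperf.sock
KEY=/tmp/tmp.IwoGvJyGFC        # key file created earlier in this test

chmod 0660 "$KEY"
scripts/rpc.py -s "$SOCK" keyring_file_add_key key0 "$KEY"   # fails: Operation not permitted
chmod 0600 "$KEY"
scripts/rpc.py -s "$SOCK" keyring_file_add_key key0 "$KEY"   # succeeds with owner-only access
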
00:37:22.874 09:47:07 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.IwoGvJyGFC 00:37:22.874 09:47:07 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.IwoGvJyGFC 00:37:22.874 09:47:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.IwoGvJyGFC 00:37:23.180 09:47:07 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.IwoGvJyGFC 00:37:23.180 09:47:07 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:37:23.180 09:47:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:23.180 09:47:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:23.180 09:47:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:23.180 09:47:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:23.180 09:47:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:23.442 09:47:07 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:37:23.442 09:47:07 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:23.442 09:47:07 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:23.442 09:47:07 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:23.442 09:47:07 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:23.442 09:47:07 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:23.442 09:47:07 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:23.442 09:47:07 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:23.442 09:47:07 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:23.442 09:47:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:23.699 [2024-07-14 09:47:07.937797] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.IwoGvJyGFC': No such file or directory 00:37:23.699 [2024-07-14 09:47:07.937859] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:37:23.699 [2024-07-14 09:47:07.937894] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:37:23.699 [2024-07-14 09:47:07.937907] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:23.699 [2024-07-14 09:47:07.937920] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:37:23.699 request: 00:37:23.699 { 00:37:23.699 "name": "nvme0", 00:37:23.699 "trtype": "tcp", 00:37:23.699 "traddr": "127.0.0.1", 00:37:23.699 "adrfam": "ipv4", 00:37:23.699 "trsvcid": "4420", 00:37:23.699 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:37:23.699 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:23.699 "prchk_reftag": false, 00:37:23.699 "prchk_guard": false, 00:37:23.699 "hdgst": false, 00:37:23.699 "ddgst": false, 00:37:23.699 "psk": "key0", 00:37:23.699 "method": "bdev_nvme_attach_controller", 00:37:23.699 "req_id": 1 00:37:23.699 } 00:37:23.699 Got JSON-RPC error response 00:37:23.699 response: 00:37:23.699 { 00:37:23.699 "code": -19, 00:37:23.699 "message": "No such device" 00:37:23.699 } 00:37:23.699 09:47:07 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:23.699 09:47:07 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:23.699 09:47:07 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:23.699 09:47:07 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:23.699 09:47:07 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:37:23.699 09:47:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:23.956 09:47:08 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:23.956 09:47:08 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:23.956 09:47:08 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:23.956 09:47:08 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:23.956 09:47:08 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:23.956 09:47:08 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:23.956 09:47:08 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Khi789DA20 00:37:23.956 09:47:08 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:23.956 09:47:08 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:23.956 09:47:08 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:37:23.956 09:47:08 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:23.956 09:47:08 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:37:23.956 09:47:08 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:37:23.956 09:47:08 keyring_file -- nvmf/common.sh@705 -- # python - 00:37:23.956 09:47:08 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Khi789DA20 00:37:23.956 09:47:08 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Khi789DA20 00:37:23.956 09:47:08 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.Khi789DA20 00:37:23.956 09:47:08 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Khi789DA20 00:37:23.956 09:47:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Khi789DA20 00:37:24.214 09:47:08 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:24.214 09:47:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:24.472 nvme0n1 00:37:24.472 09:47:08 keyring_file -- keyring/file.sh@99 
-- # get_refcnt key0 00:37:24.472 09:47:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:24.472 09:47:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:24.472 09:47:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:24.472 09:47:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:24.472 09:47:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:24.731 09:47:09 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:37:24.731 09:47:09 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:37:24.731 09:47:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:24.990 09:47:09 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:37:24.990 09:47:09 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:37:24.990 09:47:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:24.990 09:47:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:24.990 09:47:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:25.248 09:47:09 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:37:25.248 09:47:09 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:37:25.248 09:47:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:25.248 09:47:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:25.248 09:47:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:25.248 09:47:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:25.248 09:47:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:25.506 09:47:09 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:37:25.506 09:47:09 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:25.506 09:47:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:25.765 09:47:10 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:37:25.765 09:47:10 keyring_file -- keyring/file.sh@104 -- # jq length 00:37:25.765 09:47:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:26.023 09:47:10 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:37:26.023 09:47:10 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Khi789DA20 00:37:26.023 09:47:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Khi789DA20 00:37:26.281 09:47:10 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.ajnDJazEpN 00:37:26.281 09:47:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.ajnDJazEpN 00:37:26.539 09:47:10 keyring_file -- keyring/file.sh@109 -- # 
bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:26.539 09:47:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:26.797 nvme0n1 00:37:26.797 09:47:11 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:37:26.797 09:47:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:37:27.055 09:47:11 keyring_file -- keyring/file.sh@112 -- # config='{ 00:37:27.055 "subsystems": [ 00:37:27.055 { 00:37:27.055 "subsystem": "keyring", 00:37:27.055 "config": [ 00:37:27.055 { 00:37:27.055 "method": "keyring_file_add_key", 00:37:27.055 "params": { 00:37:27.055 "name": "key0", 00:37:27.055 "path": "/tmp/tmp.Khi789DA20" 00:37:27.055 } 00:37:27.055 }, 00:37:27.055 { 00:37:27.055 "method": "keyring_file_add_key", 00:37:27.055 "params": { 00:37:27.055 "name": "key1", 00:37:27.055 "path": "/tmp/tmp.ajnDJazEpN" 00:37:27.055 } 00:37:27.055 } 00:37:27.055 ] 00:37:27.055 }, 00:37:27.055 { 00:37:27.055 "subsystem": "iobuf", 00:37:27.055 "config": [ 00:37:27.055 { 00:37:27.055 "method": "iobuf_set_options", 00:37:27.055 "params": { 00:37:27.055 "small_pool_count": 8192, 00:37:27.055 "large_pool_count": 1024, 00:37:27.055 "small_bufsize": 8192, 00:37:27.055 "large_bufsize": 135168 00:37:27.055 } 00:37:27.055 } 00:37:27.055 ] 00:37:27.055 }, 00:37:27.055 { 00:37:27.055 "subsystem": "sock", 00:37:27.055 "config": [ 00:37:27.055 { 00:37:27.055 "method": "sock_set_default_impl", 00:37:27.055 "params": { 00:37:27.055 "impl_name": "posix" 00:37:27.055 } 00:37:27.055 }, 00:37:27.055 { 00:37:27.055 "method": "sock_impl_set_options", 00:37:27.055 "params": { 00:37:27.055 "impl_name": "ssl", 00:37:27.055 "recv_buf_size": 4096, 00:37:27.055 "send_buf_size": 4096, 00:37:27.055 "enable_recv_pipe": true, 00:37:27.055 "enable_quickack": false, 00:37:27.055 "enable_placement_id": 0, 00:37:27.055 "enable_zerocopy_send_server": true, 00:37:27.055 "enable_zerocopy_send_client": false, 00:37:27.055 "zerocopy_threshold": 0, 00:37:27.055 "tls_version": 0, 00:37:27.055 "enable_ktls": false 00:37:27.055 } 00:37:27.055 }, 00:37:27.055 { 00:37:27.055 "method": "sock_impl_set_options", 00:37:27.055 "params": { 00:37:27.055 "impl_name": "posix", 00:37:27.055 "recv_buf_size": 2097152, 00:37:27.055 "send_buf_size": 2097152, 00:37:27.055 "enable_recv_pipe": true, 00:37:27.055 "enable_quickack": false, 00:37:27.055 "enable_placement_id": 0, 00:37:27.055 "enable_zerocopy_send_server": true, 00:37:27.055 "enable_zerocopy_send_client": false, 00:37:27.055 "zerocopy_threshold": 0, 00:37:27.055 "tls_version": 0, 00:37:27.055 "enable_ktls": false 00:37:27.055 } 00:37:27.055 } 00:37:27.055 ] 00:37:27.055 }, 00:37:27.055 { 00:37:27.055 "subsystem": "vmd", 00:37:27.055 "config": [] 00:37:27.055 }, 00:37:27.055 { 00:37:27.055 "subsystem": "accel", 00:37:27.055 "config": [ 00:37:27.055 { 00:37:27.055 "method": "accel_set_options", 00:37:27.055 "params": { 00:37:27.055 "small_cache_size": 128, 00:37:27.055 "large_cache_size": 16, 00:37:27.055 "task_count": 2048, 00:37:27.055 "sequence_count": 2048, 00:37:27.055 "buf_count": 2048 00:37:27.055 } 00:37:27.055 } 00:37:27.055 ] 00:37:27.055 }, 00:37:27.055 { 00:37:27.055 
"subsystem": "bdev", 00:37:27.055 "config": [ 00:37:27.055 { 00:37:27.055 "method": "bdev_set_options", 00:37:27.055 "params": { 00:37:27.055 "bdev_io_pool_size": 65535, 00:37:27.055 "bdev_io_cache_size": 256, 00:37:27.055 "bdev_auto_examine": true, 00:37:27.055 "iobuf_small_cache_size": 128, 00:37:27.055 "iobuf_large_cache_size": 16 00:37:27.055 } 00:37:27.055 }, 00:37:27.055 { 00:37:27.055 "method": "bdev_raid_set_options", 00:37:27.055 "params": { 00:37:27.055 "process_window_size_kb": 1024 00:37:27.055 } 00:37:27.055 }, 00:37:27.055 { 00:37:27.055 "method": "bdev_iscsi_set_options", 00:37:27.055 "params": { 00:37:27.055 "timeout_sec": 30 00:37:27.055 } 00:37:27.055 }, 00:37:27.055 { 00:37:27.055 "method": "bdev_nvme_set_options", 00:37:27.055 "params": { 00:37:27.055 "action_on_timeout": "none", 00:37:27.055 "timeout_us": 0, 00:37:27.055 "timeout_admin_us": 0, 00:37:27.055 "keep_alive_timeout_ms": 10000, 00:37:27.055 "arbitration_burst": 0, 00:37:27.055 "low_priority_weight": 0, 00:37:27.055 "medium_priority_weight": 0, 00:37:27.055 "high_priority_weight": 0, 00:37:27.055 "nvme_adminq_poll_period_us": 10000, 00:37:27.055 "nvme_ioq_poll_period_us": 0, 00:37:27.055 "io_queue_requests": 512, 00:37:27.055 "delay_cmd_submit": true, 00:37:27.055 "transport_retry_count": 4, 00:37:27.055 "bdev_retry_count": 3, 00:37:27.055 "transport_ack_timeout": 0, 00:37:27.055 "ctrlr_loss_timeout_sec": 0, 00:37:27.055 "reconnect_delay_sec": 0, 00:37:27.055 "fast_io_fail_timeout_sec": 0, 00:37:27.055 "disable_auto_failback": false, 00:37:27.055 "generate_uuids": false, 00:37:27.055 "transport_tos": 0, 00:37:27.055 "nvme_error_stat": false, 00:37:27.055 "rdma_srq_size": 0, 00:37:27.055 "io_path_stat": false, 00:37:27.055 "allow_accel_sequence": false, 00:37:27.055 "rdma_max_cq_size": 0, 00:37:27.055 "rdma_cm_event_timeout_ms": 0, 00:37:27.055 "dhchap_digests": [ 00:37:27.055 "sha256", 00:37:27.055 "sha384", 00:37:27.055 "sha512" 00:37:27.055 ], 00:37:27.055 "dhchap_dhgroups": [ 00:37:27.055 "null", 00:37:27.055 "ffdhe2048", 00:37:27.055 "ffdhe3072", 00:37:27.055 "ffdhe4096", 00:37:27.055 "ffdhe6144", 00:37:27.056 "ffdhe8192" 00:37:27.056 ] 00:37:27.056 } 00:37:27.056 }, 00:37:27.056 { 00:37:27.056 "method": "bdev_nvme_attach_controller", 00:37:27.056 "params": { 00:37:27.056 "name": "nvme0", 00:37:27.056 "trtype": "TCP", 00:37:27.056 "adrfam": "IPv4", 00:37:27.056 "traddr": "127.0.0.1", 00:37:27.056 "trsvcid": "4420", 00:37:27.056 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:27.056 "prchk_reftag": false, 00:37:27.056 "prchk_guard": false, 00:37:27.056 "ctrlr_loss_timeout_sec": 0, 00:37:27.056 "reconnect_delay_sec": 0, 00:37:27.056 "fast_io_fail_timeout_sec": 0, 00:37:27.056 "psk": "key0", 00:37:27.056 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:27.056 "hdgst": false, 00:37:27.056 "ddgst": false 00:37:27.056 } 00:37:27.056 }, 00:37:27.056 { 00:37:27.056 "method": "bdev_nvme_set_hotplug", 00:37:27.056 "params": { 00:37:27.056 "period_us": 100000, 00:37:27.056 "enable": false 00:37:27.056 } 00:37:27.056 }, 00:37:27.056 { 00:37:27.056 "method": "bdev_wait_for_examine" 00:37:27.056 } 00:37:27.056 ] 00:37:27.056 }, 00:37:27.056 { 00:37:27.056 "subsystem": "nbd", 00:37:27.056 "config": [] 00:37:27.056 } 00:37:27.056 ] 00:37:27.056 }' 00:37:27.056 09:47:11 keyring_file -- keyring/file.sh@114 -- # killprocess 924752 00:37:27.056 09:47:11 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 924752 ']' 00:37:27.056 09:47:11 keyring_file -- common/autotest_common.sh@952 -- # kill -0 924752 00:37:27.056 09:47:11 
keyring_file -- common/autotest_common.sh@953 -- # uname 00:37:27.056 09:47:11 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:27.056 09:47:11 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 924752 00:37:27.056 09:47:11 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:37:27.056 09:47:11 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:37:27.056 09:47:11 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 924752' 00:37:27.056 killing process with pid 924752 00:37:27.056 09:47:11 keyring_file -- common/autotest_common.sh@967 -- # kill 924752 00:37:27.056 Received shutdown signal, test time was about 1.000000 seconds 00:37:27.056 00:37:27.056 Latency(us) 00:37:27.056 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:27.056 =================================================================================================================== 00:37:27.056 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:27.056 09:47:11 keyring_file -- common/autotest_common.sh@972 -- # wait 924752 00:37:27.314 09:47:11 keyring_file -- keyring/file.sh@117 -- # bperfpid=926199 00:37:27.314 09:47:11 keyring_file -- keyring/file.sh@119 -- # waitforlisten 926199 /var/tmp/bperf.sock 00:37:27.314 09:47:11 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 926199 ']' 00:37:27.314 09:47:11 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:27.314 09:47:11 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:37:27.314 09:47:11 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:27.314 09:47:11 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:27.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
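
The save_config dump captured above is now fed to a fresh bdevperf instance over /dev/fd/63, so the keyring entries and the PSK-protected controller are restored from JSON instead of being re-created by hand; the full echoed configuration follows in the trace. A condensed sketch of the pattern, assuming the commands are run from the SPDK tree:

# Condensed sketch of the restart-from-config pattern (paths as in this run):
# capture the live JSON configuration, then hand it to a fresh bdevperf through
# process substitution so the file-based keys and the PSK-protected controller
# are recreated at startup without re-issuing the individual RPCs.
SOCK=/var/tmp/bperf.sock
CONFIG=$(scripts/rpc.py -s "$SOCK" save_config)

# (the previous bdevperf bound to $SOCK has to be stopped first, as the trace above does)
build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r "$SOCK" -z -c <(echo "$CONFIG") &
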
00:37:27.314 09:47:11 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:37:27.314 "subsystems": [ 00:37:27.314 { 00:37:27.314 "subsystem": "keyring", 00:37:27.314 "config": [ 00:37:27.314 { 00:37:27.314 "method": "keyring_file_add_key", 00:37:27.314 "params": { 00:37:27.314 "name": "key0", 00:37:27.314 "path": "/tmp/tmp.Khi789DA20" 00:37:27.314 } 00:37:27.314 }, 00:37:27.314 { 00:37:27.314 "method": "keyring_file_add_key", 00:37:27.314 "params": { 00:37:27.314 "name": "key1", 00:37:27.314 "path": "/tmp/tmp.ajnDJazEpN" 00:37:27.314 } 00:37:27.314 } 00:37:27.314 ] 00:37:27.314 }, 00:37:27.314 { 00:37:27.314 "subsystem": "iobuf", 00:37:27.314 "config": [ 00:37:27.314 { 00:37:27.314 "method": "iobuf_set_options", 00:37:27.314 "params": { 00:37:27.314 "small_pool_count": 8192, 00:37:27.314 "large_pool_count": 1024, 00:37:27.314 "small_bufsize": 8192, 00:37:27.314 "large_bufsize": 135168 00:37:27.314 } 00:37:27.314 } 00:37:27.314 ] 00:37:27.314 }, 00:37:27.314 { 00:37:27.314 "subsystem": "sock", 00:37:27.314 "config": [ 00:37:27.314 { 00:37:27.314 "method": "sock_set_default_impl", 00:37:27.314 "params": { 00:37:27.314 "impl_name": "posix" 00:37:27.314 } 00:37:27.314 }, 00:37:27.314 { 00:37:27.314 "method": "sock_impl_set_options", 00:37:27.314 "params": { 00:37:27.314 "impl_name": "ssl", 00:37:27.314 "recv_buf_size": 4096, 00:37:27.314 "send_buf_size": 4096, 00:37:27.314 "enable_recv_pipe": true, 00:37:27.314 "enable_quickack": false, 00:37:27.314 "enable_placement_id": 0, 00:37:27.314 "enable_zerocopy_send_server": true, 00:37:27.314 "enable_zerocopy_send_client": false, 00:37:27.314 "zerocopy_threshold": 0, 00:37:27.314 "tls_version": 0, 00:37:27.314 "enable_ktls": false 00:37:27.314 } 00:37:27.314 }, 00:37:27.314 { 00:37:27.314 "method": "sock_impl_set_options", 00:37:27.314 "params": { 00:37:27.314 "impl_name": "posix", 00:37:27.314 "recv_buf_size": 2097152, 00:37:27.314 "send_buf_size": 2097152, 00:37:27.314 "enable_recv_pipe": true, 00:37:27.314 "enable_quickack": false, 00:37:27.314 "enable_placement_id": 0, 00:37:27.314 "enable_zerocopy_send_server": true, 00:37:27.314 "enable_zerocopy_send_client": false, 00:37:27.314 "zerocopy_threshold": 0, 00:37:27.314 "tls_version": 0, 00:37:27.314 "enable_ktls": false 00:37:27.314 } 00:37:27.314 } 00:37:27.314 ] 00:37:27.314 }, 00:37:27.314 { 00:37:27.314 "subsystem": "vmd", 00:37:27.314 "config": [] 00:37:27.314 }, 00:37:27.314 { 00:37:27.314 "subsystem": "accel", 00:37:27.314 "config": [ 00:37:27.314 { 00:37:27.314 "method": "accel_set_options", 00:37:27.314 "params": { 00:37:27.314 "small_cache_size": 128, 00:37:27.314 "large_cache_size": 16, 00:37:27.314 "task_count": 2048, 00:37:27.314 "sequence_count": 2048, 00:37:27.314 "buf_count": 2048 00:37:27.314 } 00:37:27.314 } 00:37:27.314 ] 00:37:27.314 }, 00:37:27.314 { 00:37:27.314 "subsystem": "bdev", 00:37:27.314 "config": [ 00:37:27.314 { 00:37:27.314 "method": "bdev_set_options", 00:37:27.314 "params": { 00:37:27.314 "bdev_io_pool_size": 65535, 00:37:27.314 "bdev_io_cache_size": 256, 00:37:27.314 "bdev_auto_examine": true, 00:37:27.314 "iobuf_small_cache_size": 128, 00:37:27.314 "iobuf_large_cache_size": 16 00:37:27.315 } 00:37:27.315 }, 00:37:27.315 { 00:37:27.315 "method": "bdev_raid_set_options", 00:37:27.315 "params": { 00:37:27.315 "process_window_size_kb": 1024 00:37:27.315 } 00:37:27.315 }, 00:37:27.315 { 00:37:27.315 "method": "bdev_iscsi_set_options", 00:37:27.315 "params": { 00:37:27.315 "timeout_sec": 30 00:37:27.315 } 00:37:27.315 }, 00:37:27.315 { 00:37:27.315 "method": 
"bdev_nvme_set_options", 00:37:27.315 "params": { 00:37:27.315 "action_on_timeout": "none", 00:37:27.315 "timeout_us": 0, 00:37:27.315 "timeout_admin_us": 0, 00:37:27.315 "keep_alive_timeout_ms": 10000, 00:37:27.315 "arbitration_burst": 0, 00:37:27.315 "low_priority_weight": 0, 00:37:27.315 "medium_priority_weight": 0, 00:37:27.315 "high_priority_weight": 0, 00:37:27.315 "nvme_adminq_poll_period_us": 10000, 00:37:27.315 "nvme_ioq_poll_period_us": 0, 00:37:27.315 "io_queue_requests": 512, 00:37:27.315 "delay_cmd_submit": true, 00:37:27.315 "transport_retry_count": 4, 00:37:27.315 "bdev_retry_count": 3, 00:37:27.315 "transport_ack_timeout": 0, 00:37:27.315 "ctrlr_loss_timeout_sec": 0, 00:37:27.315 "reconnect_delay_sec": 0, 00:37:27.315 "fast_io_fail_timeout_sec": 0, 00:37:27.315 "disable_auto_failback": false, 00:37:27.315 "generate_uuids": false, 00:37:27.315 "transport_tos": 0, 00:37:27.315 "nvme_error_stat": false, 00:37:27.315 "rdma_srq_size": 0, 00:37:27.315 "io_path_stat": false, 00:37:27.315 "allow_accel_sequence": false, 00:37:27.315 "rdma_max_cq_size": 0, 00:37:27.315 "rdma_cm_event_timeout_ms": 0, 00:37:27.315 "dhchap_digests": [ 00:37:27.315 "sha256", 00:37:27.315 "sha384", 00:37:27.315 "sha512" 00:37:27.315 ], 00:37:27.315 "dhchap_dhgroups": [ 00:37:27.315 "null", 00:37:27.315 "ffdhe2048", 00:37:27.315 "ffdhe3072", 00:37:27.315 "ffdhe4096", 00:37:27.315 "ffdhe6144", 00:37:27.315 "ffdhe8192" 00:37:27.315 ] 00:37:27.315 } 00:37:27.315 }, 00:37:27.315 { 00:37:27.315 "method": "bdev_nvme_attach_controller", 00:37:27.315 "params": { 00:37:27.315 "name": "nvme0", 00:37:27.315 "trtype": "TCP", 00:37:27.315 "adrfam": "IPv4", 00:37:27.315 "traddr": "127.0.0.1", 00:37:27.315 "trsvcid": "4420", 00:37:27.315 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:27.315 "prchk_reftag": false, 00:37:27.315 "prchk_guard": false, 00:37:27.315 "ctrlr_loss_timeout_sec": 0, 00:37:27.315 "reconnect_delay_sec": 0, 00:37:27.315 "fast_io_fail_timeout_sec": 0, 00:37:27.315 "psk": "key0", 00:37:27.315 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:27.315 "hdgst": false, 00:37:27.315 "ddgst": false 00:37:27.315 } 00:37:27.315 }, 00:37:27.315 { 00:37:27.315 "method": "bdev_nvme_set_hotplug", 00:37:27.315 "params": { 00:37:27.315 "period_us": 100000, 00:37:27.315 "enable": false 00:37:27.315 } 00:37:27.315 }, 00:37:27.315 { 00:37:27.315 "method": "bdev_wait_for_examine" 00:37:27.315 } 00:37:27.315 ] 00:37:27.315 }, 00:37:27.315 { 00:37:27.315 "subsystem": "nbd", 00:37:27.315 "config": [] 00:37:27.315 } 00:37:27.315 ] 00:37:27.315 }' 00:37:27.315 09:47:11 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:27.315 09:47:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:27.315 [2024-07-14 09:47:11.692171] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:37:27.315 [2024-07-14 09:47:11.692254] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid926199 ] 00:37:27.315 EAL: No free 2048 kB hugepages reported on node 1 00:37:27.315 [2024-07-14 09:47:11.750204] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:27.573 [2024-07-14 09:47:11.840358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:27.573 [2024-07-14 09:47:12.021073] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:28.507 09:47:12 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:28.507 09:47:12 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:37:28.507 09:47:12 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:37:28.507 09:47:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:28.507 09:47:12 keyring_file -- keyring/file.sh@120 -- # jq length 00:37:28.507 09:47:12 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:37:28.507 09:47:12 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:37:28.507 09:47:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:28.507 09:47:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:28.507 09:47:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:28.507 09:47:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:28.507 09:47:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:28.765 09:47:13 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:37:28.765 09:47:13 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:37:28.765 09:47:13 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:28.765 09:47:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:28.765 09:47:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:28.765 09:47:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:28.765 09:47:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:29.022 09:47:13 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:37:29.022 09:47:13 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:37:29.022 09:47:13 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:37:29.022 09:47:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:37:29.280 09:47:13 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:37:29.280 09:47:13 keyring_file -- keyring/file.sh@1 -- # cleanup 00:37:29.280 09:47:13 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.Khi789DA20 /tmp/tmp.ajnDJazEpN 00:37:29.280 09:47:13 keyring_file -- keyring/file.sh@20 -- # killprocess 926199 00:37:29.280 09:47:13 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 926199 ']' 00:37:29.280 09:47:13 keyring_file -- common/autotest_common.sh@952 -- # kill -0 926199 00:37:29.280 09:47:13 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:37:29.280 09:47:13 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:29.280 09:47:13 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 926199 00:37:29.280 09:47:13 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:37:29.280 09:47:13 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:37:29.280 09:47:13 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 926199' 00:37:29.280 killing process with pid 926199 00:37:29.280 09:47:13 keyring_file -- common/autotest_common.sh@967 -- # kill 926199 00:37:29.280 Received shutdown signal, test time was about 1.000000 seconds 00:37:29.280 00:37:29.280 Latency(us) 00:37:29.280 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:29.280 =================================================================================================================== 00:37:29.280 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:29.280 09:47:13 keyring_file -- common/autotest_common.sh@972 -- # wait 926199 00:37:29.537 09:47:13 keyring_file -- keyring/file.sh@21 -- # killprocess 924733 00:37:29.537 09:47:13 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 924733 ']' 00:37:29.537 09:47:13 keyring_file -- common/autotest_common.sh@952 -- # kill -0 924733 00:37:29.537 09:47:13 keyring_file -- common/autotest_common.sh@953 -- # uname 00:37:29.537 09:47:13 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:29.537 09:47:13 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 924733 00:37:29.537 09:47:13 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:29.537 09:47:13 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:29.537 09:47:13 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 924733' 00:37:29.537 killing process with pid 924733 00:37:29.537 09:47:13 keyring_file -- common/autotest_common.sh@967 -- # kill 924733 00:37:29.537 [2024-07-14 09:47:13.914845] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:37:29.537 09:47:13 keyring_file -- common/autotest_common.sh@972 -- # wait 924733 00:37:30.103 00:37:30.103 real 0m14.113s 00:37:30.103 user 0m34.677s 00:37:30.103 sys 0m3.230s 00:37:30.103 09:47:14 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:30.103 09:47:14 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:30.103 ************************************ 00:37:30.103 END TEST keyring_file 00:37:30.103 ************************************ 00:37:30.103 09:47:14 -- common/autotest_common.sh@1142 -- # return 0 00:37:30.103 09:47:14 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:37:30.103 09:47:14 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:30.103 09:47:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:30.103 09:47:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:30.103 09:47:14 -- common/autotest_common.sh@10 -- # set +x 00:37:30.103 ************************************ 00:37:30.103 START TEST keyring_linux 00:37:30.103 ************************************ 00:37:30.103 09:47:14 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:30.103 * Looking for test storage... 00:37:30.103 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:30.103 09:47:14 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:30.103 09:47:14 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:30.104 09:47:14 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:37:30.104 09:47:14 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:30.104 09:47:14 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:30.104 09:47:14 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:30.104 09:47:14 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:30.104 09:47:14 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:30.104 09:47:14 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:30.104 09:47:14 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:30.104 09:47:14 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:30.104 09:47:14 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:30.104 09:47:14 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:30.104 09:47:14 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:30.104 09:47:14 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:30.104 09:47:14 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:30.104 09:47:14 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:30.104 09:47:14 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:30.104 09:47:14 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:30.104 09:47:14 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:30.104 09:47:14 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:30.104 09:47:14 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:30.104 09:47:14 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:30.104 09:47:14 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:30.104 09:47:14 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:30.104 09:47:14 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:30.104 09:47:14 keyring_linux -- paths/export.sh@5 -- # export PATH 00:37:30.104 09:47:14 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:30.104 09:47:14 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:37:30.104 09:47:14 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:30.104 09:47:14 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:30.104 09:47:14 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:30.104 09:47:14 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:30.104 09:47:14 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:30.104 09:47:14 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:30.104 09:47:14 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:30.104 09:47:14 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:30.104 09:47:14 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:30.104 09:47:14 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:30.104 09:47:14 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:30.104 09:47:14 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:37:30.104 09:47:14 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:37:30.104 09:47:14 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:37:30.104 09:47:14 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:37:30.104 09:47:14 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:30.104 09:47:14 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:37:30.104 09:47:14 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:30.104 09:47:14 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:30.104 09:47:14 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:37:30.104 09:47:14 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:30.104 09:47:14 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:30.104 09:47:14 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:37:30.104 09:47:14 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:30.104 09:47:14 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:37:30.104 09:47:14 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:37:30.104 09:47:14 keyring_linux -- nvmf/common.sh@705 -- # python - 00:37:30.104 09:47:14 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:37:30.104 09:47:14 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:37:30.104 /tmp/:spdk-test:key0 00:37:30.104 09:47:14 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:37:30.104 09:47:14 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:30.104 09:47:14 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:37:30.104 09:47:14 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:30.104 09:47:14 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:30.104 09:47:14 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:37:30.104 09:47:14 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:30.104 09:47:14 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:30.104 09:47:14 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:37:30.104 09:47:14 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:30.104 09:47:14 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:37:30.104 09:47:14 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:37:30.104 09:47:14 keyring_linux -- nvmf/common.sh@705 -- # python - 00:37:30.104 09:47:14 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:37:30.104 09:47:14 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:37:30.104 /tmp/:spdk-test:key1 00:37:30.104 09:47:14 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=926565 00:37:30.104 09:47:14 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:30.104 09:47:14 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 926565 00:37:30.104 09:47:14 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 926565 ']' 00:37:30.104 09:47:14 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:30.104 09:47:14 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:30.104 09:47:14 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:30.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:30.104 09:47:14 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:30.104 09:47:14 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:30.361 [2024-07-14 09:47:14.571541] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
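
The prep_key and format_interchange_psk helpers traced above wrap the raw key material in the NVMe/TCP TLS PSK interchange format (the NVMeTLSkey-1:00: prefix seen below) before writing it to /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1 with mode 0600. A rough sketch of that transformation; the little-endian CRC-32 suffix is an assumption about the interchange format, not something this log shows:

# Rough sketch of the PSK preparation step (writes the same file this test uses).
python3 - > /tmp/:spdk-test:key0 <<'PY'
import base64, zlib
psk = b"00112233445566778899aabbccddeeff"      # literal key string used by this test
crc = zlib.crc32(psk).to_bytes(4, "little")    # assumed little-endian CRC-32 suffix
print("NVMeTLSkey-1:00:" + base64.b64encode(psk + crc).decode() + ":")
PY
chmod 0600 /tmp/:spdk-test:key0
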
00:37:30.362 [2024-07-14 09:47:14.571640] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid926565 ] 00:37:30.362 EAL: No free 2048 kB hugepages reported on node 1 00:37:30.362 [2024-07-14 09:47:14.632876] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:30.362 [2024-07-14 09:47:14.717841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:30.619 09:47:14 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:30.619 09:47:14 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:37:30.619 09:47:14 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:37:30.619 09:47:14 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:30.619 09:47:14 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:30.619 [2024-07-14 09:47:14.967182] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:30.619 null0 00:37:30.619 [2024-07-14 09:47:14.999225] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:30.619 [2024-07-14 09:47:14.999638] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:30.619 09:47:15 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:30.619 09:47:15 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:37:30.619 189917233 00:37:30.619 09:47:15 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:37:30.619 393751463 00:37:30.619 09:47:15 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=926635 00:37:30.619 09:47:15 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 926635 /var/tmp/bperf.sock 00:37:30.619 09:47:15 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:37:30.619 09:47:15 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 926635 ']' 00:37:30.619 09:47:15 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:30.619 09:47:15 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:30.619 09:47:15 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:30.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:30.619 09:47:15 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:30.619 09:47:15 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:30.877 [2024-07-14 09:47:15.072716] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
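
With both interchange-format PSKs loaded into the session keyring above (serials 189917233 and 393751463), bdevperf is started with --wait-for-rpc so the Linux keyring module can be enabled before framework initialization, after which the controller references the kernel key by name. A condensed sketch of that flow, with the key payload abbreviated and the socket and NQNs taken from this run:

# Condensed sketch of the kernel-keyring flow: the PSK lives in the session
# keyring as a "user" key and SPDK looks it up by name once the linux keyring
# module is enabled ahead of framework init.
SOCK=/var/tmp/bperf.sock
SN=$(keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:...:" @s)   # prints the new key's serial

scripts/rpc.py -s "$SOCK" keyring_linux_set_options --enable
scripts/rpc.py -s "$SOCK" framework_start_init
scripts/rpc.py -s "$SOCK" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
    --psk :spdk-test:key0

keyctl search @s user :spdk-test:key0    # resolves to the same serial as $SN
keyctl print "$SN"                       # dumps the interchange-format PSK for comparison
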
00:37:30.877 [2024-07-14 09:47:15.072797] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid926635 ] 00:37:30.877 EAL: No free 2048 kB hugepages reported on node 1 00:37:30.877 [2024-07-14 09:47:15.134300] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:30.877 [2024-07-14 09:47:15.217151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:30.877 09:47:15 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:30.877 09:47:15 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:37:30.877 09:47:15 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:37:30.877 09:47:15 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:37:31.135 09:47:15 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:37:31.135 09:47:15 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:31.701 09:47:15 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:31.701 09:47:15 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:31.701 [2024-07-14 09:47:16.079404] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:31.959 nvme0n1 00:37:31.959 09:47:16 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:37:31.959 09:47:16 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:37:31.959 09:47:16 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:31.959 09:47:16 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:31.959 09:47:16 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:31.959 09:47:16 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:32.217 09:47:16 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:37:32.217 09:47:16 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:32.217 09:47:16 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:37:32.217 09:47:16 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:37:32.217 09:47:16 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:32.217 09:47:16 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:32.217 09:47:16 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:37:32.475 09:47:16 keyring_linux -- keyring/linux.sh@25 -- # sn=189917233 00:37:32.475 09:47:16 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:37:32.475 09:47:16 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:37:32.475 09:47:16 keyring_linux -- keyring/linux.sh@26 -- # [[ 189917233 == \1\8\9\9\1\7\2\3\3 ]] 00:37:32.475 09:47:16 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 189917233 00:37:32.475 09:47:16 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:37:32.475 09:47:16 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:32.475 Running I/O for 1 seconds... 00:37:33.409 00:37:33.409 Latency(us) 00:37:33.409 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:33.409 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:37:33.409 nvme0n1 : 1.03 3128.23 12.22 0.00 0.00 40352.41 10631.40 53205.52 00:37:33.409 =================================================================================================================== 00:37:33.409 Total : 3128.23 12.22 0.00 0.00 40352.41 10631.40 53205.52 00:37:33.409 0 00:37:33.667 09:47:17 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:33.667 09:47:17 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:33.925 09:47:18 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:37:33.925 09:47:18 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:37:33.925 09:47:18 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:33.925 09:47:18 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:33.925 09:47:18 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:33.925 09:47:18 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:33.925 09:47:18 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:37:33.925 09:47:18 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:33.925 09:47:18 keyring_linux -- keyring/linux.sh@23 -- # return 00:37:33.925 09:47:18 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:33.925 09:47:18 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:37:33.925 09:47:18 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:33.925 09:47:18 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:33.925 09:47:18 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:33.925 09:47:18 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:33.925 09:47:18 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:33.925 09:47:18 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:33.925 09:47:18 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:34.183 [2024-07-14 09:47:18.620068] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:34.183 [2024-07-14 09:47:18.620289] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1539680 (107): Transport endpoint is not connected 00:37:34.183 [2024-07-14 09:47:18.621280] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1539680 (9): Bad file descriptor 00:37:34.183 [2024-07-14 09:47:18.622279] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:34.183 [2024-07-14 09:47:18.622298] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:34.183 [2024-07-14 09:47:18.622312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:34.183 request: 00:37:34.183 { 00:37:34.183 "name": "nvme0", 00:37:34.183 "trtype": "tcp", 00:37:34.183 "traddr": "127.0.0.1", 00:37:34.183 "adrfam": "ipv4", 00:37:34.183 "trsvcid": "4420", 00:37:34.183 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:34.183 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:34.183 "prchk_reftag": false, 00:37:34.183 "prchk_guard": false, 00:37:34.183 "hdgst": false, 00:37:34.183 "ddgst": false, 00:37:34.183 "psk": ":spdk-test:key1", 00:37:34.183 "method": "bdev_nvme_attach_controller", 00:37:34.183 "req_id": 1 00:37:34.183 } 00:37:34.183 Got JSON-RPC error response 00:37:34.183 response: 00:37:34.183 { 00:37:34.183 "code": -5, 00:37:34.183 "message": "Input/output error" 00:37:34.183 } 00:37:34.442 09:47:18 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:37:34.442 09:47:18 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:34.442 09:47:18 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:34.442 09:47:18 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:34.442 09:47:18 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:37:34.442 09:47:18 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:34.442 09:47:18 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:37:34.442 09:47:18 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:37:34.442 09:47:18 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:37:34.442 09:47:18 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:37:34.442 09:47:18 keyring_linux -- keyring/linux.sh@33 -- # sn=189917233 00:37:34.442 09:47:18 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 189917233 00:37:34.442 1 links removed 00:37:34.442 09:47:18 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:34.442 09:47:18 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:37:34.442 09:47:18 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:37:34.442 09:47:18 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:37:34.442 09:47:18 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:37:34.442 09:47:18 keyring_linux -- keyring/linux.sh@33 -- # sn=393751463 00:37:34.442 
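
Cleanup resolves each test key's serial in the session keyring and unlinks it, which is what the keyctl search and keyctl unlink calls around this point do for :spdk-test:key0 and :spdk-test:key1. A condensed sketch of that helper:

# Condensed sketch of the cleanup helper: resolve the key's serial, then unlink it.
unlink_key() {
    local sn
    sn=$(keyctl search @s user "$1") || return 0   # nothing to unlink if the key is absent
    keyctl unlink "$sn"
}

unlink_key :spdk-test:key0
unlink_key :spdk-test:key1
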
09:47:18 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 393751463 00:37:34.442 1 links removed 00:37:34.442 09:47:18 keyring_linux -- keyring/linux.sh@41 -- # killprocess 926635 00:37:34.442 09:47:18 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 926635 ']' 00:37:34.442 09:47:18 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 926635 00:37:34.442 09:47:18 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:37:34.442 09:47:18 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:34.442 09:47:18 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 926635 00:37:34.442 09:47:18 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:37:34.442 09:47:18 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:37:34.442 09:47:18 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 926635' 00:37:34.442 killing process with pid 926635 00:37:34.442 09:47:18 keyring_linux -- common/autotest_common.sh@967 -- # kill 926635 00:37:34.442 Received shutdown signal, test time was about 1.000000 seconds 00:37:34.442 00:37:34.442 Latency(us) 00:37:34.442 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:34.442 =================================================================================================================== 00:37:34.442 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:34.442 09:47:18 keyring_linux -- common/autotest_common.sh@972 -- # wait 926635 00:37:34.701 09:47:18 keyring_linux -- keyring/linux.sh@42 -- # killprocess 926565 00:37:34.701 09:47:18 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 926565 ']' 00:37:34.701 09:47:18 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 926565 00:37:34.701 09:47:18 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:37:34.701 09:47:18 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:34.701 09:47:18 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 926565 00:37:34.701 09:47:18 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:34.701 09:47:18 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:34.701 09:47:18 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 926565' 00:37:34.701 killing process with pid 926565 00:37:34.701 09:47:18 keyring_linux -- common/autotest_common.sh@967 -- # kill 926565 00:37:34.701 09:47:18 keyring_linux -- common/autotest_common.sh@972 -- # wait 926565 00:37:34.959 00:37:34.959 real 0m4.951s 00:37:34.959 user 0m9.274s 00:37:34.959 sys 0m1.477s 00:37:34.959 09:47:19 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:34.959 09:47:19 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:34.959 ************************************ 00:37:34.959 END TEST keyring_linux 00:37:34.959 ************************************ 00:37:34.959 09:47:19 -- common/autotest_common.sh@1142 -- # return 0 00:37:34.959 09:47:19 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:37:34.959 09:47:19 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:37:34.959 09:47:19 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:37:34.959 09:47:19 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:37:34.959 09:47:19 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:37:34.959 09:47:19 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:37:34.959 09:47:19 -- spdk/autotest.sh@339 -- # 
'[' 0 -eq 1 ']' 00:37:34.959 09:47:19 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:37:34.959 09:47:19 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:37:34.959 09:47:19 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:37:34.959 09:47:19 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:37:34.959 09:47:19 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:37:34.959 09:47:19 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:37:34.959 09:47:19 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:37:34.959 09:47:19 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:37:34.959 09:47:19 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:37:34.959 09:47:19 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:37:34.959 09:47:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:34.959 09:47:19 -- common/autotest_common.sh@10 -- # set +x 00:37:34.959 09:47:19 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:37:34.959 09:47:19 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:37:34.959 09:47:19 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:37:34.959 09:47:19 -- common/autotest_common.sh@10 -- # set +x 00:37:36.901 INFO: APP EXITING 00:37:36.901 INFO: killing all VMs 00:37:36.901 INFO: killing vhost app 00:37:36.901 INFO: EXIT DONE 00:37:37.835 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:37:37.835 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:37:37.835 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:37:37.835 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:37:37.835 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:37:37.835 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:37:37.835 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:37:37.835 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:37:37.835 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:37:37.835 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:37:38.092 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:37:38.092 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:37:38.092 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:37:38.092 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:37:38.092 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:37:38.092 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:37:38.092 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:37:39.465 Cleaning 00:37:39.465 Removing: /var/run/dpdk/spdk0/config 00:37:39.465 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:37:39.465 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:37:39.465 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:37:39.465 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:37:39.465 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:37:39.465 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:37:39.465 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:37:39.465 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:37:39.465 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:37:39.465 Removing: /var/run/dpdk/spdk0/hugepage_info 00:37:39.465 Removing: /var/run/dpdk/spdk1/config 00:37:39.465 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:37:39.465 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:37:39.465 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:37:39.465 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:37:39.465 
Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:37:39.465 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:37:39.465 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:37:39.465 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:37:39.465 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:37:39.465 Removing: /var/run/dpdk/spdk1/hugepage_info 00:37:39.465 Removing: /var/run/dpdk/spdk1/mp_socket 00:37:39.465 Removing: /var/run/dpdk/spdk2/config 00:37:39.465 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:37:39.465 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:37:39.465 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:37:39.465 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:37:39.465 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:37:39.465 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:37:39.465 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:37:39.465 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:37:39.465 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:37:39.465 Removing: /var/run/dpdk/spdk2/hugepage_info 00:37:39.465 Removing: /var/run/dpdk/spdk3/config 00:37:39.465 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:37:39.465 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:37:39.465 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:37:39.465 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:37:39.465 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:37:39.465 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:37:39.465 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:37:39.465 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:37:39.465 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:37:39.465 Removing: /var/run/dpdk/spdk3/hugepage_info 00:37:39.465 Removing: /var/run/dpdk/spdk4/config 00:37:39.465 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:37:39.465 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:37:39.465 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:37:39.465 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:37:39.465 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:37:39.465 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:37:39.465 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:37:39.465 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:37:39.465 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:37:39.465 Removing: /var/run/dpdk/spdk4/hugepage_info 00:37:39.465 Removing: /dev/shm/bdev_svc_trace.1 00:37:39.465 Removing: /dev/shm/nvmf_trace.0 00:37:39.465 Removing: /dev/shm/spdk_tgt_trace.pid606630 00:37:39.465 Removing: /var/run/dpdk/spdk0 00:37:39.465 Removing: /var/run/dpdk/spdk1 00:37:39.465 Removing: /var/run/dpdk/spdk2 00:37:39.465 Removing: /var/run/dpdk/spdk3 00:37:39.465 Removing: /var/run/dpdk/spdk4 00:37:39.465 Removing: /var/run/dpdk/spdk_pid605086 00:37:39.465 Removing: /var/run/dpdk/spdk_pid605816 00:37:39.465 Removing: /var/run/dpdk/spdk_pid606630 00:37:39.465 Removing: /var/run/dpdk/spdk_pid607067 00:37:39.465 Removing: /var/run/dpdk/spdk_pid607762 00:37:39.465 Removing: /var/run/dpdk/spdk_pid607904 00:37:39.465 Removing: /var/run/dpdk/spdk_pid608614 00:37:39.465 Removing: /var/run/dpdk/spdk_pid608627 00:37:39.465 Removing: /var/run/dpdk/spdk_pid608869 00:37:39.465 Removing: /var/run/dpdk/spdk_pid610197 00:37:39.465 Removing: /var/run/dpdk/spdk_pid611117 00:37:39.465 Removing: /var/run/dpdk/spdk_pid611426 
00:37:39.465 Removing: /var/run/dpdk/spdk_pid611612 00:37:39.465 Removing: /var/run/dpdk/spdk_pid611820 00:37:39.465 Removing: /var/run/dpdk/spdk_pid612008 00:37:39.465 Removing: /var/run/dpdk/spdk_pid612165 00:37:39.465 Removing: /var/run/dpdk/spdk_pid612322 00:37:39.465 Removing: /var/run/dpdk/spdk_pid612516 00:37:39.465 Removing: /var/run/dpdk/spdk_pid612830 00:37:39.465 Removing: /var/run/dpdk/spdk_pid615177 00:37:39.465 Removing: /var/run/dpdk/spdk_pid615367 00:37:39.465 Removing: /var/run/dpdk/spdk_pid615622 00:37:39.465 Removing: /var/run/dpdk/spdk_pid615632 00:37:39.465 Removing: /var/run/dpdk/spdk_pid615937 00:37:39.465 Removing: /var/run/dpdk/spdk_pid616066 00:37:39.465 Removing: /var/run/dpdk/spdk_pid616380 00:37:39.465 Removing: /var/run/dpdk/spdk_pid616503 00:37:39.465 Removing: /var/run/dpdk/spdk_pid616677 00:37:39.465 Removing: /var/run/dpdk/spdk_pid616798 00:37:39.465 Removing: /var/run/dpdk/spdk_pid616964 00:37:39.465 Removing: /var/run/dpdk/spdk_pid616978 00:37:39.465 Removing: /var/run/dpdk/spdk_pid617347 00:37:39.465 Removing: /var/run/dpdk/spdk_pid617499 00:37:39.465 Removing: /var/run/dpdk/spdk_pid617811 00:37:39.465 Removing: /var/run/dpdk/spdk_pid617931 00:37:39.465 Removing: /var/run/dpdk/spdk_pid618007 00:37:39.465 Removing: /var/run/dpdk/spdk_pid618077 00:37:39.465 Removing: /var/run/dpdk/spdk_pid618349 00:37:39.465 Removing: /var/run/dpdk/spdk_pid618509 00:37:39.465 Removing: /var/run/dpdk/spdk_pid618667 00:37:39.465 Removing: /var/run/dpdk/spdk_pid618889 00:37:39.465 Removing: /var/run/dpdk/spdk_pid619094 00:37:39.465 Removing: /var/run/dpdk/spdk_pid619252 00:37:39.465 Removing: /var/run/dpdk/spdk_pid619412 00:37:39.465 Removing: /var/run/dpdk/spdk_pid619680 00:37:39.465 Removing: /var/run/dpdk/spdk_pid619841 00:37:39.465 Removing: /var/run/dpdk/spdk_pid620001 00:37:39.465 Removing: /var/run/dpdk/spdk_pid620160 00:37:39.465 Removing: /var/run/dpdk/spdk_pid620428 00:37:39.465 Removing: /var/run/dpdk/spdk_pid620583 00:37:39.465 Removing: /var/run/dpdk/spdk_pid620748 00:37:39.465 Removing: /var/run/dpdk/spdk_pid620911 00:37:39.465 Removing: /var/run/dpdk/spdk_pid621173 00:37:39.465 Removing: /var/run/dpdk/spdk_pid621332 00:37:39.465 Removing: /var/run/dpdk/spdk_pid621499 00:37:39.465 Removing: /var/run/dpdk/spdk_pid621765 00:37:39.465 Removing: /var/run/dpdk/spdk_pid621929 00:37:39.465 Removing: /var/run/dpdk/spdk_pid622000 00:37:39.465 Removing: /var/run/dpdk/spdk_pid622217 00:37:39.465 Removing: /var/run/dpdk/spdk_pid624372 00:37:39.465 Removing: /var/run/dpdk/spdk_pid677818 00:37:39.465 Removing: /var/run/dpdk/spdk_pid680422 00:37:39.465 Removing: /var/run/dpdk/spdk_pid687261 00:37:39.465 Removing: /var/run/dpdk/spdk_pid691164 00:37:39.465 Removing: /var/run/dpdk/spdk_pid693640 00:37:39.465 Removing: /var/run/dpdk/spdk_pid694042 00:37:39.465 Removing: /var/run/dpdk/spdk_pid698002 00:37:39.465 Removing: /var/run/dpdk/spdk_pid701792 00:37:39.465 Removing: /var/run/dpdk/spdk_pid701835 00:37:39.465 Removing: /var/run/dpdk/spdk_pid702376 00:37:39.465 Removing: /var/run/dpdk/spdk_pid703034 00:37:39.465 Removing: /var/run/dpdk/spdk_pid703688 00:37:39.465 Removing: /var/run/dpdk/spdk_pid704039 00:37:39.465 Removing: /var/run/dpdk/spdk_pid704088 00:37:39.465 Removing: /var/run/dpdk/spdk_pid704233 00:37:39.723 Removing: /var/run/dpdk/spdk_pid704369 00:37:39.723 Removing: /var/run/dpdk/spdk_pid704374 00:37:39.723 Removing: /var/run/dpdk/spdk_pid705023 00:37:39.723 Removing: /var/run/dpdk/spdk_pid705607 00:37:39.723 Removing: /var/run/dpdk/spdk_pid706227 00:37:39.723 
Removing: /var/run/dpdk/spdk_pid706623 00:37:39.723 Removing: /var/run/dpdk/spdk_pid706717 00:37:39.723 Removing: /var/run/dpdk/spdk_pid706884 00:37:39.723 Removing: /var/run/dpdk/spdk_pid707768 00:37:39.723 Removing: /var/run/dpdk/spdk_pid708486 00:37:39.723 Removing: /var/run/dpdk/spdk_pid713831 00:37:39.723 Removing: /var/run/dpdk/spdk_pid714001 00:37:39.723 Removing: /var/run/dpdk/spdk_pid716605 00:37:39.723 Removing: /var/run/dpdk/spdk_pid720376 00:37:39.723 Removing: /var/run/dpdk/spdk_pid722974 00:37:39.723 Removing: /var/run/dpdk/spdk_pid729342 00:37:39.723 Removing: /var/run/dpdk/spdk_pid734480 00:37:39.723 Removing: /var/run/dpdk/spdk_pid735725 00:37:39.723 Removing: /var/run/dpdk/spdk_pid736392 00:37:39.723 Removing: /var/run/dpdk/spdk_pid746461 00:37:39.723 Removing: /var/run/dpdk/spdk_pid748664 00:37:39.723 Removing: /var/run/dpdk/spdk_pid773906 00:37:39.723 Removing: /var/run/dpdk/spdk_pid776681 00:37:39.723 Removing: /var/run/dpdk/spdk_pid777861 00:37:39.723 Removing: /var/run/dpdk/spdk_pid779284 00:37:39.723 Removing: /var/run/dpdk/spdk_pid779308 00:37:39.723 Removing: /var/run/dpdk/spdk_pid779444 00:37:39.723 Removing: /var/run/dpdk/spdk_pid779571 00:37:39.723 Removing: /var/run/dpdk/spdk_pid780366 00:37:39.723 Removing: /var/run/dpdk/spdk_pid781706 00:37:39.723 Removing: /var/run/dpdk/spdk_pid782435 00:37:39.723 Removing: /var/run/dpdk/spdk_pid782738 00:37:39.723 Removing: /var/run/dpdk/spdk_pid784365 00:37:39.723 Removing: /var/run/dpdk/spdk_pid784792 00:37:39.723 Removing: /var/run/dpdk/spdk_pid785346 00:37:39.723 Removing: /var/run/dpdk/spdk_pid787863 00:37:39.723 Removing: /var/run/dpdk/spdk_pid791111 00:37:39.723 Removing: /var/run/dpdk/spdk_pid794636 00:37:39.723 Removing: /var/run/dpdk/spdk_pid817989 00:37:39.723 Removing: /var/run/dpdk/spdk_pid820746 00:37:39.723 Removing: /var/run/dpdk/spdk_pid824516 00:37:39.723 Removing: /var/run/dpdk/spdk_pid825452 00:37:39.723 Removing: /var/run/dpdk/spdk_pid826535 00:37:39.723 Removing: /var/run/dpdk/spdk_pid829076 00:37:39.723 Removing: /var/run/dpdk/spdk_pid831427 00:37:39.723 Removing: /var/run/dpdk/spdk_pid835509 00:37:39.723 Removing: /var/run/dpdk/spdk_pid835545 00:37:39.723 Removing: /var/run/dpdk/spdk_pid838296 00:37:39.723 Removing: /var/run/dpdk/spdk_pid838542 00:37:39.723 Removing: /var/run/dpdk/spdk_pid838685 00:37:39.723 Removing: /var/run/dpdk/spdk_pid838955 00:37:39.723 Removing: /var/run/dpdk/spdk_pid838963 00:37:39.723 Removing: /var/run/dpdk/spdk_pid840103 00:37:39.723 Removing: /var/run/dpdk/spdk_pid841808 00:37:39.723 Removing: /var/run/dpdk/spdk_pid843126 00:37:39.723 Removing: /var/run/dpdk/spdk_pid844304 00:37:39.723 Removing: /var/run/dpdk/spdk_pid845480 00:37:39.723 Removing: /var/run/dpdk/spdk_pid846663 00:37:39.723 Removing: /var/run/dpdk/spdk_pid850461 00:37:39.723 Removing: /var/run/dpdk/spdk_pid850795 00:37:39.723 Removing: /var/run/dpdk/spdk_pid852164 00:37:39.723 Removing: /var/run/dpdk/spdk_pid852927 00:37:39.723 Removing: /var/run/dpdk/spdk_pid856516 00:37:39.723 Removing: /var/run/dpdk/spdk_pid858485 00:37:39.723 Removing: /var/run/dpdk/spdk_pid861768 00:37:39.723 Removing: /var/run/dpdk/spdk_pid865077 00:37:39.723 Removing: /var/run/dpdk/spdk_pid871915 00:37:39.723 Removing: /var/run/dpdk/spdk_pid876379 00:37:39.723 Removing: /var/run/dpdk/spdk_pid876382 00:37:39.723 Removing: /var/run/dpdk/spdk_pid888577 00:37:39.723 Removing: /var/run/dpdk/spdk_pid888988 00:37:39.723 Removing: /var/run/dpdk/spdk_pid889397 00:37:39.723 Removing: /var/run/dpdk/spdk_pid889803 00:37:39.723 Removing: 
/var/run/dpdk/spdk_pid890378 00:37:39.723 Removing: /var/run/dpdk/spdk_pid890791 00:37:39.723 Removing: /var/run/dpdk/spdk_pid891306 00:37:39.723 Removing: /var/run/dpdk/spdk_pid891718 00:37:39.723 Removing: /var/run/dpdk/spdk_pid894098 00:37:39.723 Removing: /var/run/dpdk/spdk_pid894349 00:37:39.723 Removing: /var/run/dpdk/spdk_pid898134 00:37:39.723 Removing: /var/run/dpdk/spdk_pid898189 00:37:39.723 Removing: /var/run/dpdk/spdk_pid899910 00:37:39.723 Removing: /var/run/dpdk/spdk_pid905429 00:37:39.723 Removing: /var/run/dpdk/spdk_pid905435 00:37:39.723 Removing: /var/run/dpdk/spdk_pid908303 00:37:39.723 Removing: /var/run/dpdk/spdk_pid909611 00:37:39.723 Removing: /var/run/dpdk/spdk_pid911007 00:37:39.723 Removing: /var/run/dpdk/spdk_pid911863 00:37:39.723 Removing: /var/run/dpdk/spdk_pid913152 00:37:39.723 Removing: /var/run/dpdk/spdk_pid914019 00:37:39.723 Removing: /var/run/dpdk/spdk_pid919284 00:37:39.723 Removing: /var/run/dpdk/spdk_pid919683 00:37:39.723 Removing: /var/run/dpdk/spdk_pid920068 00:37:39.723 Removing: /var/run/dpdk/spdk_pid921621 00:37:39.723 Removing: /var/run/dpdk/spdk_pid921900 00:37:39.723 Removing: /var/run/dpdk/spdk_pid922295 00:37:39.723 Removing: /var/run/dpdk/spdk_pid924733 00:37:39.723 Removing: /var/run/dpdk/spdk_pid924752 00:37:39.723 Removing: /var/run/dpdk/spdk_pid926199 00:37:39.723 Removing: /var/run/dpdk/spdk_pid926565 00:37:39.723 Removing: /var/run/dpdk/spdk_pid926635 00:37:39.980 Clean 00:37:39.980 09:47:24 -- common/autotest_common.sh@1451 -- # return 0 00:37:39.980 09:47:24 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:37:39.980 09:47:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:39.980 09:47:24 -- common/autotest_common.sh@10 -- # set +x 00:37:39.980 09:47:24 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:37:39.980 09:47:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:39.980 09:47:24 -- common/autotest_common.sh@10 -- # set +x 00:37:39.980 09:47:24 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:39.980 09:47:24 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:37:39.980 09:47:24 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:37:39.980 09:47:24 -- spdk/autotest.sh@391 -- # hash lcov 00:37:39.980 09:47:24 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:37:39.980 09:47:24 -- spdk/autotest.sh@393 -- # hostname 00:37:39.980 09:47:24 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:37:40.237 geninfo: WARNING: invalid characters removed from testname! 
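
For reference, the coverage post-processing recorded in the timestamped steps around this point boils down to the standard lcov capture/merge/filter sequence: capture counters from the instrumented tree, add them to the pre-test baseline, then remove third-party and uninteresting paths from the combined tracefile. A minimal sketch under stated assumptions -- SPDK_DIR and OUT are placeholder paths, and the --rc option list is trimmed relative to the full invocations shown in the log:

    #!/usr/bin/env bash
    # Sketch only: paths and option set are placeholders, not the exact Jenkins values above.
    SPDK_DIR=/path/to/spdk
    OUT=$SPDK_DIR/../output
    LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"

    # Capture the counters produced by the test run into a per-run tracefile.
    lcov $LCOV_OPTS -c -d "$SPDK_DIR" -t "$(hostname)" -o "$OUT/cov_test.info"

    # Merge the pre-test baseline with the test-run counters.
    lcov $LCOV_OPTS -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

    # Strip DPDK, system headers, and example/app sources from the combined report.
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov $LCOV_OPTS -r "$OUT/cov_total.info" "$pat" -o "$OUT/cov_total.info"
    done
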
00:38:12.307 09:47:51 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:12.307 09:47:55 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:14.206 09:47:58 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:17.491 09:48:01 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:20.797 09:48:04 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:23.329 09:48:07 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:26.615 09:48:10 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:38:26.615 09:48:10 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:26.615 09:48:10 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:38:26.615 09:48:10 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:26.615 09:48:10 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:26.615 09:48:10 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:26.615 09:48:10 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:26.615 09:48:10 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:26.615 09:48:10 -- paths/export.sh@5 -- $ export PATH 00:38:26.615 09:48:10 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:26.615 09:48:10 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:38:26.615 09:48:10 -- common/autobuild_common.sh@444 -- $ date +%s 00:38:26.615 09:48:10 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720943290.XXXXXX 00:38:26.615 09:48:10 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720943290.VVAQBh 00:38:26.615 09:48:10 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:38:26.615 09:48:10 -- common/autobuild_common.sh@450 -- $ '[' -n v22.11.4 ']' 00:38:26.615 09:48:10 -- common/autobuild_common.sh@451 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:38:26.615 09:48:10 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:38:26.615 09:48:10 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:38:26.615 09:48:10 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:38:26.615 09:48:10 -- common/autobuild_common.sh@460 -- $ get_config_params 00:38:26.615 09:48:10 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:38:26.615 09:48:10 -- common/autotest_common.sh@10 -- $ set +x 00:38:26.615 09:48:10 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:38:26.615 09:48:10 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:38:26.615 09:48:10 -- pm/common@17 -- $ local monitor 00:38:26.615 09:48:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:26.615 09:48:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:26.615 09:48:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:26.615 
09:48:10 -- pm/common@21 -- $ date +%s 00:38:26.615 09:48:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:26.615 09:48:10 -- pm/common@21 -- $ date +%s 00:38:26.615 09:48:10 -- pm/common@25 -- $ sleep 1 00:38:26.615 09:48:10 -- pm/common@21 -- $ date +%s 00:38:26.615 09:48:10 -- pm/common@21 -- $ date +%s 00:38:26.615 09:48:10 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720943290 00:38:26.615 09:48:10 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720943290 00:38:26.615 09:48:10 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720943290 00:38:26.615 09:48:10 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720943290 00:38:26.615 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720943290_collect-vmstat.pm.log 00:38:26.615 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720943290_collect-cpu-load.pm.log 00:38:26.615 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720943290_collect-cpu-temp.pm.log 00:38:26.615 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720943290_collect-bmc-pm.bmc.pm.log 00:38:27.182 09:48:11 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:38:27.182 09:48:11 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:38:27.182 09:48:11 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:27.182 09:48:11 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:38:27.182 09:48:11 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:38:27.182 09:48:11 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:38:27.182 09:48:11 -- spdk/autopackage.sh@19 -- $ timing_finish 00:38:27.182 09:48:11 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:38:27.182 09:48:11 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:38:27.182 09:48:11 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:38:27.182 09:48:11 -- spdk/autopackage.sh@20 -- $ exit 0 00:38:27.182 09:48:11 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:38:27.182 09:48:11 -- pm/common@29 -- $ signal_monitor_resources TERM 00:38:27.182 09:48:11 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:38:27.183 09:48:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:27.183 09:48:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:38:27.183 09:48:11 -- pm/common@44 -- $ pid=938579 00:38:27.183 09:48:11 -- pm/common@50 -- $ kill -TERM 938579 00:38:27.183 09:48:11 -- pm/common@42 -- $ for monitor in 
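
The start_monitor_resources step above and the stop_monitor_resources teardown that follows rely on a simple pidfile convention: each collect-* script is launched in the background against the shared power output directory, and cleanup later signals whatever pid the matching .pid file records (the kill -TERM calls below). The sketch here is a hypothetical simplification of that convention -- the function names, POWER_DIR, and the wildcard pidfile loop are illustrative stand-ins, not the actual helpers in spdk/scripts/perf/pm:

    #!/usr/bin/env bash
    # Hypothetical simplification of the pidfile start/stop pattern used by the pm monitors.
    POWER_DIR=/path/to/output/power   # placeholder for the $output/power directory

    start_monitor() {
        local name=$1; shift
        "$@" &                                # launch the collector in the background
        echo $! > "$POWER_DIR/$name.pid"      # remember its pid for teardown
    }

    stop_monitors() {
        local pidfile pid
        for pidfile in "$POWER_DIR"/*.pid; do
            [[ -e $pidfile ]] || continue
            pid=$(<"$pidfile")
            kill -TERM "$pid" 2>/dev/null || true
            rm -f "$pidfile"
        done
    }
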
"${MONITOR_RESOURCES[@]}" 00:38:27.183 09:48:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:38:27.183 09:48:11 -- pm/common@44 -- $ pid=938581 00:38:27.183 09:48:11 -- pm/common@50 -- $ kill -TERM 938581 00:38:27.183 09:48:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:27.183 09:48:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:38:27.183 09:48:11 -- pm/common@44 -- $ pid=938583 00:38:27.183 09:48:11 -- pm/common@50 -- $ kill -TERM 938583 00:38:27.183 09:48:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:27.183 09:48:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:38:27.183 09:48:11 -- pm/common@44 -- $ pid=938611 00:38:27.183 09:48:11 -- pm/common@50 -- $ sudo -E kill -TERM 938611 00:38:27.183 + [[ -n 500743 ]] 00:38:27.183 + sudo kill 500743 00:38:27.192 [Pipeline] } 00:38:27.210 [Pipeline] // stage 00:38:27.217 [Pipeline] } 00:38:27.234 [Pipeline] // timeout 00:38:27.239 [Pipeline] } 00:38:27.256 [Pipeline] // catchError 00:38:27.261 [Pipeline] } 00:38:27.279 [Pipeline] // wrap 00:38:27.286 [Pipeline] } 00:38:27.301 [Pipeline] // catchError 00:38:27.312 [Pipeline] stage 00:38:27.315 [Pipeline] { (Epilogue) 00:38:27.332 [Pipeline] catchError 00:38:27.334 [Pipeline] { 00:38:27.349 [Pipeline] echo 00:38:27.351 Cleanup processes 00:38:27.357 [Pipeline] sh 00:38:27.639 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:27.639 938724 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:38:27.639 938843 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:27.655 [Pipeline] sh 00:38:27.936 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:27.936 ++ grep -v 'sudo pgrep' 00:38:27.936 ++ awk '{print $1}' 00:38:27.936 + sudo kill -9 938724 00:38:27.947 [Pipeline] sh 00:38:28.227 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:38:40.430 [Pipeline] sh 00:38:40.736 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:38:40.736 Artifacts sizes are good 00:38:40.752 [Pipeline] archiveArtifacts 00:38:40.759 Archiving artifacts 00:38:40.968 [Pipeline] sh 00:38:41.272 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:38:41.286 [Pipeline] cleanWs 00:38:41.295 [WS-CLEANUP] Deleting project workspace... 00:38:41.295 [WS-CLEANUP] Deferred wipeout is used... 00:38:41.302 [WS-CLEANUP] done 00:38:41.303 [Pipeline] } 00:38:41.324 [Pipeline] // catchError 00:38:41.336 [Pipeline] sh 00:38:41.615 + logger -p user.info -t JENKINS-CI 00:38:41.624 [Pipeline] } 00:38:41.641 [Pipeline] // stage 00:38:41.647 [Pipeline] } 00:38:41.665 [Pipeline] // node 00:38:41.671 [Pipeline] End of Pipeline 00:38:41.710 Finished: SUCCESS